Nov 25 09:00:16 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 09:00:16 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 09:00:16 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 09:00:16 localhost kernel: BIOS-provided physical RAM map:
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 09:00:16 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Nov 25 09:00:16 localhost kernel: NX (Execute Disable) protection: active
Nov 25 09:00:16 localhost kernel: APIC: Static calls initialized
Nov 25 09:00:16 localhost kernel: SMBIOS 2.8 present.
Nov 25 09:00:16 localhost kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Nov 25 09:00:16 localhost kernel: Hypervisor detected: KVM
Nov 25 09:00:16 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 09:00:16 localhost kernel: kvm-clock: using sched offset of 2673205755 cycles
Nov 25 09:00:16 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 09:00:16 localhost kernel: tsc: Detected 2445.406 MHz processor
Nov 25 09:00:16 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 25 09:00:16 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 25 09:00:16 localhost kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Nov 25 09:00:16 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 09:00:16 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 09:00:16 localhost kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 25 09:00:16 localhost kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Nov 25 09:00:16 localhost kernel: Using GB pages for direct mapping
Nov 25 09:00:16 localhost kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 09:00:16 localhost kernel: ACPI: Early table checksum verification disabled
Nov 25 09:00:16 localhost kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Nov 25 09:00:16 localhost kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:16 localhost kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:16 localhost kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:16 localhost kernel: ACPI: FACS 0x000000007FFDFC80 000040
Nov 25 09:00:16 localhost kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:16 localhost kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:16 localhost kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:16 localhost kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Nov 25 09:00:16 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Nov 25 09:00:16 localhost kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Nov 25 09:00:16 localhost kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Nov 25 09:00:16 localhost kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Nov 25 09:00:16 localhost kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Nov 25 09:00:16 localhost kernel: No NUMA configuration found
Nov 25 09:00:16 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Nov 25 09:00:16 localhost kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Nov 25 09:00:16 localhost kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Nov 25 09:00:16 localhost kernel: Zone ranges:
Nov 25 09:00:16 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 09:00:16 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 09:00:16 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000027fffffff]
Nov 25 09:00:16 localhost kernel:   Device   empty
Nov 25 09:00:16 localhost kernel: Movable zone start for each node
Nov 25 09:00:16 localhost kernel: Early memory node ranges
Nov 25 09:00:16 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 09:00:16 localhost kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 25 09:00:16 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000027fffffff]
Nov 25 09:00:16 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Nov 25 09:00:16 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 09:00:16 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 09:00:16 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 09:00:16 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 09:00:16 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 09:00:16 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 09:00:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 09:00:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 09:00:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 09:00:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 09:00:16 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 09:00:16 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 09:00:16 localhost kernel: TSC deadline timer available
Nov 25 09:00:16 localhost kernel: CPU topo: Max. logical packages:   4
Nov 25 09:00:16 localhost kernel: CPU topo: Max. logical dies:       4
Nov 25 09:00:16 localhost kernel: CPU topo: Max. dies per package:   1
Nov 25 09:00:16 localhost kernel: CPU topo: Max. threads per core:   1
Nov 25 09:00:16 localhost kernel: CPU topo: Num. cores per package:     1
Nov 25 09:00:16 localhost kernel: CPU topo: Num. threads per package:   1
Nov 25 09:00:16 localhost kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 25 09:00:16 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 09:00:16 localhost kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 25 09:00:16 localhost kernel: kvm-guest: setup PV sched yield
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 09:00:16 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 09:00:16 localhost kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 25 09:00:16 localhost kernel: Booting paravirtualized kernel on KVM
Nov 25 09:00:16 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 09:00:16 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 25 09:00:16 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Nov 25 09:00:16 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u524288 alloc=1*2097152
Nov 25 09:00:16 localhost kernel: pcpu-alloc: [0] 0 1 2 3 
Nov 25 09:00:16 localhost kernel: kvm-guest: PV spinlocks enabled
Nov 25 09:00:16 localhost kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 25 09:00:16 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 09:00:16 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 09:00:16 localhost kernel: random: crng init done
Nov 25 09:00:16 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 09:00:16 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 09:00:16 localhost kernel: Fallback order for Node 0: 0 
Nov 25 09:00:16 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 09:00:16 localhost kernel: Policy zone: Normal
Nov 25 09:00:16 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 09:00:16 localhost kernel: software IO TLB: area num 4.
Nov 25 09:00:16 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 25 09:00:16 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 09:00:16 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 09:00:16 localhost kernel: Dynamic Preempt: voluntary
Nov 25 09:00:16 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 09:00:16 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 25 09:00:16 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Nov 25 09:00:16 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 25 09:00:16 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 25 09:00:16 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 25 09:00:16 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 09:00:16 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 25 09:00:16 localhost kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 09:00:16 localhost kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 09:00:16 localhost kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 09:00:16 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Nov 25 09:00:16 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 09:00:16 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 09:00:16 localhost kernel: Console: colour VGA+ 80x25
Nov 25 09:00:16 localhost kernel: printk: console [ttyS0] enabled
Nov 25 09:00:16 localhost kernel: ACPI: Core revision 20230331
Nov 25 09:00:16 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 09:00:16 localhost kernel: x2apic enabled
Nov 25 09:00:16 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 09:00:16 localhost kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 25 09:00:16 localhost kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 25 09:00:16 localhost kernel: kvm-guest: setup PV IPIs
Nov 25 09:00:16 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 09:00:16 localhost kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Nov 25 09:00:16 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 09:00:16 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 09:00:16 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 09:00:16 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 09:00:16 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 09:00:16 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 09:00:16 localhost kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 25 09:00:16 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 09:00:16 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 09:00:16 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 09:00:16 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 09:00:16 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 09:00:16 localhost kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Nov 25 09:00:16 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 09:00:16 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 09:00:16 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 09:00:16 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 25 09:00:16 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 09:00:16 localhost kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Nov 25 09:00:16 localhost kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 25 09:00:16 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 25 09:00:16 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 25 09:00:16 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 09:00:16 localhost kernel: landlock: Up and running.
Nov 25 09:00:16 localhost kernel: Yama: becoming mindful.
Nov 25 09:00:16 localhost kernel: SELinux:  Initializing.
Nov 25 09:00:16 localhost kernel: LSM support for eBPF active
Nov 25 09:00:16 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 09:00:16 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 09:00:16 localhost kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 25 09:00:16 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 09:00:16 localhost kernel: ... version:                0
Nov 25 09:00:16 localhost kernel: ... bit width:              48
Nov 25 09:00:16 localhost kernel: ... generic registers:      6
Nov 25 09:00:16 localhost kernel: ... value mask:             0000ffffffffffff
Nov 25 09:00:16 localhost kernel: ... max period:             00007fffffffffff
Nov 25 09:00:16 localhost kernel: ... fixed-purpose events:   0
Nov 25 09:00:16 localhost kernel: ... event mask:             000000000000003f
Nov 25 09:00:16 localhost kernel: signal: max sigframe size: 3376
Nov 25 09:00:16 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 25 09:00:16 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 25 09:00:16 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 25 09:00:16 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 25 09:00:16 localhost kernel: .... node  #0, CPUs:      #1 #2 #3
Nov 25 09:00:16 localhost kernel: smp: Brought up 1 node, 4 CPUs
Nov 25 09:00:16 localhost kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Nov 25 09:00:16 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 25 09:00:16 localhost kernel: Memory: 7778916K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 604512K reserved, 0K cma-reserved)
Nov 25 09:00:16 localhost kernel: devtmpfs: initialized
Nov 25 09:00:16 localhost kernel: x86/mm: Memory block size: 128MB
Nov 25 09:00:16 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 09:00:16 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 25 09:00:16 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 09:00:16 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 09:00:16 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 09:00:16 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 09:00:16 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 09:00:16 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 25 09:00:16 localhost kernel: audit: type=2000 audit(1764061215.708:1): state=initialized audit_enabled=0 res=1
Nov 25 09:00:16 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 09:00:16 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 09:00:16 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 09:00:16 localhost kernel: cpuidle: using governor menu
Nov 25 09:00:16 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 09:00:16 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 25 09:00:16 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 25 09:00:16 localhost kernel: PCI: Using configuration type 1 for base access
Nov 25 09:00:16 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 09:00:16 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 09:00:16 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 09:00:16 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 09:00:16 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 09:00:16 localhost kernel: Demotion targets for Node 0: null
Nov 25 09:00:16 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 09:00:16 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 25 09:00:16 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 25 09:00:16 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 09:00:16 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 09:00:16 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 09:00:16 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 09:00:16 localhost kernel: ACPI: Interpreter enabled
Nov 25 09:00:16 localhost kernel: ACPI: PM: (supports S0 S5)
Nov 25 09:00:16 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 09:00:16 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 09:00:16 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 09:00:16 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 25 09:00:16 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 09:00:16 localhost kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 09:00:16 localhost kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Nov 25 09:00:16 localhost kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Nov 25 09:00:16 localhost kernel: PCI host bridge to bus 0000:00
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 25 09:00:16 localhost kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Nov 25 09:00:16 localhost kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:02: extended config space not accessible
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [1] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [2] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [3] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [4] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [5] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [6] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [7] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [8] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [9] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [10] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [11] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [12] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [13] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [14] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [15] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [16] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [17] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [18] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [19] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [20] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [21] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [22] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [23] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [24] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [25] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [26] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [27] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [28] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [29] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [30] registered
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [31] registered
Nov 25 09:00:16 localhost kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-2] registered
Nov 25 09:00:16 localhost kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Nov 25 09:00:16 localhost kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-3] registered
Nov 25 09:00:16 localhost kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Nov 25 09:00:16 localhost kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-4] registered
Nov 25 09:00:16 localhost kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-5] registered
Nov 25 09:00:16 localhost kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 25 09:00:16 localhost kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-6] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-7] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-8] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-9] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-10] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-11] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-12] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-13] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-14] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-15] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-16] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 09:00:16 localhost kernel: acpiphp: Slot [0-17] registered
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 25 09:00:16 localhost kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 25 09:00:16 localhost kernel: iommu: Default domain type: Translated
Nov 25 09:00:16 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 09:00:16 localhost kernel: SCSI subsystem initialized
Nov 25 09:00:16 localhost kernel: ACPI: bus type USB registered
Nov 25 09:00:16 localhost kernel: usbcore: registered new interface driver usbfs
Nov 25 09:00:16 localhost kernel: usbcore: registered new interface driver hub
Nov 25 09:00:16 localhost kernel: usbcore: registered new device driver usb
Nov 25 09:00:16 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 09:00:16 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 09:00:16 localhost kernel: PTP clock support registered
Nov 25 09:00:16 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 25 09:00:16 localhost kernel: NetLabel: Initializing
Nov 25 09:00:16 localhost kernel: NetLabel:  domain hash size = 128
Nov 25 09:00:16 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 09:00:16 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 09:00:16 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 25 09:00:16 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 25 09:00:16 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 25 09:00:16 localhost kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 25 09:00:16 localhost kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 09:00:16 localhost kernel: vgaarb: loaded
Nov 25 09:00:16 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 09:00:16 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 09:00:16 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 09:00:16 localhost kernel: pnp: PnP ACPI init
Nov 25 09:00:16 localhost kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 25 09:00:16 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 25 09:00:16 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 09:00:16 localhost kernel: NET: Registered PF_INET protocol family
Nov 25 09:00:16 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 09:00:16 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 09:00:16 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 09:00:16 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 09:00:16 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 09:00:16 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 09:00:16 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 09:00:16 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 09:00:16 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 09:00:16 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 09:00:16 localhost kernel: NET: Registered PF_XDP protocol family
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 25 09:00:16 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Nov 25 09:00:16 localhost kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 25 09:00:16 localhost kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 25 09:00:16 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 25 09:00:16 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 09:00:16 localhost kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Nov 25 09:00:16 localhost kernel: ACPI: bus type thunderbolt registered
Nov 25 09:00:16 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 25 09:00:16 localhost kernel: Initialise system trusted keyrings
Nov 25 09:00:16 localhost kernel: Key type blacklist registered
Nov 25 09:00:16 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 09:00:16 localhost kernel: zbud: loaded
Nov 25 09:00:16 localhost kernel: integrity: Platform Keyring initialized
Nov 25 09:00:16 localhost kernel: integrity: Machine keyring initialized
Nov 25 09:00:16 localhost kernel: Freeing initrd memory: 75160K
Nov 25 09:00:16 localhost kernel: NET: Registered PF_ALG protocol family
Nov 25 09:00:16 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 25 09:00:16 localhost kernel: Key type asymmetric registered
Nov 25 09:00:16 localhost kernel: Asymmetric key parser 'x509' registered
Nov 25 09:00:16 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 09:00:16 localhost kernel: io scheduler mq-deadline registered
Nov 25 09:00:16 localhost kernel: io scheduler kyber registered
Nov 25 09:00:16 localhost kernel: io scheduler bfq registered
Nov 25 09:00:16 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 25 09:00:16 localhost kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Nov 25 09:00:16 localhost kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Nov 25 09:00:16 localhost kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Nov 25 09:00:16 localhost kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Nov 25 09:00:16 localhost kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Nov 25 09:00:16 localhost kernel: shpchp 0000:01:00.0: Slot initialization failed
Nov 25 09:00:16 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 09:00:16 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 09:00:16 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 25 09:00:16 localhost kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 25 09:00:16 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 09:00:16 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 09:00:16 localhost kernel: Non-volatile memory driver v1.3
Nov 25 09:00:16 localhost kernel: rdac: device handler registered
Nov 25 09:00:16 localhost kernel: hp_sw: device handler registered
Nov 25 09:00:16 localhost kernel: emc: device handler registered
Nov 25 09:00:16 localhost kernel: alua: device handler registered
Nov 25 09:00:16 localhost kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Nov 25 09:00:16 localhost kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Nov 25 09:00:16 localhost kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Nov 25 09:00:16 localhost kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Nov 25 09:00:16 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 09:00:16 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 09:00:16 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 25 09:00:16 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 09:00:16 localhost kernel: usb usb1: SerialNumber: 0000:02:01.0
Nov 25 09:00:16 localhost kernel: hub 1-0:1.0: USB hub found
Nov 25 09:00:16 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 25 09:00:16 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 09:00:16 localhost kernel: usbserial: USB Serial support registered for generic
Nov 25 09:00:16 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 09:00:16 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 09:00:16 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 09:00:16 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 09:00:16 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 09:00:16 localhost kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 25 09:00:16 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 09:00:16 localhost kernel: rtc_cmos 00:03: registered as rtc0
Nov 25 09:00:16 localhost kernel: rtc_cmos 00:03: setting system clock to 2025-11-25T09:00:16 UTC (1764061216)
Nov 25 09:00:16 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 09:00:16 localhost kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 25 09:00:16 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 09:00:16 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 09:00:16 localhost kernel: usbcore: registered new interface driver usbhid
Nov 25 09:00:16 localhost kernel: usbhid: USB HID core driver
Nov 25 09:00:16 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 25 09:00:16 localhost kernel: Initializing XFRM netlink socket
Nov 25 09:00:16 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 25 09:00:16 localhost kernel: Segment Routing with IPv6
Nov 25 09:00:16 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 25 09:00:16 localhost kernel: mpls_gso: MPLS GSO support
Nov 25 09:00:16 localhost kernel: IPI shorthand broadcast: enabled
Nov 25 09:00:16 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 09:00:16 localhost kernel: AES CTR mode by8 optimization enabled
Nov 25 09:00:16 localhost kernel: sched_clock: Marking stable (1116001732, 142573640)->(1367090574, -108515202)
Nov 25 09:00:16 localhost kernel: registered taskstats version 1
Nov 25 09:00:16 localhost kernel: Loading compiled-in X.509 certificates
Nov 25 09:00:16 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 09:00:16 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 09:00:16 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 09:00:16 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 09:00:16 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 09:00:16 localhost kernel: Demotion targets for Node 0: null
Nov 25 09:00:16 localhost kernel: page_owner is disabled
Nov 25 09:00:16 localhost kernel: Key type .fscrypt registered
Nov 25 09:00:16 localhost kernel: Key type fscrypt-provisioning registered
Nov 25 09:00:16 localhost kernel: Key type big_key registered
Nov 25 09:00:16 localhost kernel: Key type encrypted registered
Nov 25 09:00:16 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 09:00:16 localhost kernel: Loading compiled-in module X.509 certificates
Nov 25 09:00:16 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 09:00:16 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 25 09:00:16 localhost kernel: ima: No architecture policies found
Nov 25 09:00:16 localhost kernel: evm: Initialising EVM extended attributes:
Nov 25 09:00:16 localhost kernel: evm: security.selinux
Nov 25 09:00:16 localhost kernel: evm: security.SMACK64 (disabled)
Nov 25 09:00:16 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 09:00:16 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 09:00:16 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 09:00:16 localhost kernel: evm: security.apparmor (disabled)
Nov 25 09:00:16 localhost kernel: evm: security.ima
Nov 25 09:00:16 localhost kernel: evm: security.capability
Nov 25 09:00:16 localhost kernel: evm: HMAC attrs: 0x1
Nov 25 09:00:16 localhost kernel: Running certificate verification RSA selftest
Nov 25 09:00:16 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 09:00:16 localhost kernel: Running certificate verification ECDSA selftest
Nov 25 09:00:16 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 09:00:16 localhost kernel: clk: Disabling unused clocks
Nov 25 09:00:16 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 25 09:00:16 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 09:00:16 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 25 09:00:16 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 09:00:16 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 09:00:16 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 09:00:16 localhost kernel: Run /init as init process
Nov 25 09:00:16 localhost kernel:   with arguments:
Nov 25 09:00:16 localhost kernel:     /init
Nov 25 09:00:16 localhost kernel:   with environment:
Nov 25 09:00:16 localhost kernel:     HOME=/
Nov 25 09:00:16 localhost kernel:     TERM=linux
Nov 25 09:00:16 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 25 09:00:16 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 09:00:16 localhost systemd[1]: Detected virtualization kvm.
Nov 25 09:00:16 localhost systemd[1]: Detected architecture x86-64.
Nov 25 09:00:16 localhost systemd[1]: Running in initrd.
Nov 25 09:00:16 localhost systemd[1]: No hostname configured, using default hostname.
Nov 25 09:00:16 localhost systemd[1]: Hostname set to <localhost>.
Nov 25 09:00:16 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 25 09:00:16 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 25 09:00:16 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 09:00:16 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 09:00:16 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 25 09:00:16 localhost systemd[1]: Reached target Local File Systems.
Nov 25 09:00:16 localhost systemd[1]: Reached target Path Units.
Nov 25 09:00:16 localhost systemd[1]: Reached target Slice Units.
Nov 25 09:00:16 localhost systemd[1]: Reached target Swaps.
Nov 25 09:00:16 localhost systemd[1]: Reached target Timer Units.
Nov 25 09:00:16 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 09:00:16 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 25 09:00:16 localhost systemd[1]: Listening on Journal Socket.
Nov 25 09:00:16 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 09:00:16 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 09:00:16 localhost systemd[1]: Reached target Socket Units.
Nov 25 09:00:16 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 09:00:16 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 09:00:16 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 09:00:16 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 25 09:00:16 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Nov 25 09:00:16 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 09:00:16 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Nov 25 09:00:16 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 09:00:16 localhost systemd[1]: Starting Journal Service...
Nov 25 09:00:16 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 09:00:16 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 09:00:16 localhost systemd[1]: Starting Create System Users...
Nov 25 09:00:16 localhost systemd[1]: Starting Setup Virtual Console...
Nov 25 09:00:16 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 09:00:16 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 09:00:16 localhost systemd[1]: Finished Create System Users.
Nov 25 09:00:16 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 09:00:16 localhost systemd-journald[281]: Journal started
Nov 25 09:00:16 localhost systemd-journald[281]: Runtime Journal (/run/log/journal/3702d874fa3545b69e4f9523fd2bec51) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:00:16 localhost systemd-sysusers[284]: Creating group 'users' with GID 100.
Nov 25 09:00:16 localhost systemd-sysusers[284]: Creating group 'dbus' with GID 81.
Nov 25 09:00:16 localhost systemd-sysusers[284]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 09:00:16 localhost systemd[1]: Started Journal Service.
Nov 25 09:00:16 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 09:00:17 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 09:00:17 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 09:00:17 localhost systemd[1]: Finished Setup Virtual Console.
Nov 25 09:00:17 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 09:00:17 localhost systemd[1]: Starting dracut cmdline hook...
Nov 25 09:00:17 localhost dracut-cmdline[302]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 09:00:17 localhost dracut-cmdline[302]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 09:00:17 localhost systemd[1]: Finished dracut cmdline hook.
Nov 25 09:00:17 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 25 09:00:17 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 09:00:17 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 25 09:00:17 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 09:00:17 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 25 09:00:17 localhost kernel: RPC: Registered udp transport module.
Nov 25 09:00:17 localhost kernel: RPC: Registered tcp transport module.
Nov 25 09:00:17 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 09:00:17 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 09:00:17 localhost rpc.statd[417]: Version 2.5.4 starting
Nov 25 09:00:17 localhost rpc.statd[417]: Initializing NSM state
Nov 25 09:00:17 localhost rpc.idmapd[422]: Setting log level to 0
Nov 25 09:00:17 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 25 09:00:17 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 09:00:17 localhost systemd-udevd[435]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 09:00:17 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 09:00:17 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 25 09:00:17 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 25 09:00:17 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 09:00:17 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 25 09:00:17 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:00:17 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 09:00:17 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:00:17 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:00:17 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 09:00:17 localhost systemd[1]: Reached target Network.
Nov 25 09:00:17 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 09:00:17 localhost systemd[1]: Starting dracut initqueue hook...
Nov 25 09:00:17 localhost kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Nov 25 09:00:17 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 09:00:17 localhost kernel:  vda: vda1
Nov 25 09:00:17 localhost systemd-udevd[438]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:00:17 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 09:00:17 localhost systemd[1]: Reached target Initrd Root Device.
Nov 25 09:00:17 localhost kernel: libata version 3.00 loaded.
Nov 25 09:00:17 localhost kernel: ahci 0000:00:1f.2: version 3.0
Nov 25 09:00:17 localhost kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 25 09:00:17 localhost kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 25 09:00:17 localhost kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 25 09:00:17 localhost kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Nov 25 09:00:17 localhost kernel: scsi host0: ahci
Nov 25 09:00:17 localhost kernel: scsi host1: ahci
Nov 25 09:00:17 localhost kernel: scsi host2: ahci
Nov 25 09:00:17 localhost kernel: scsi host3: ahci
Nov 25 09:00:17 localhost kernel: scsi host4: ahci
Nov 25 09:00:17 localhost kernel: scsi host5: ahci
Nov 25 09:00:17 localhost kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Nov 25 09:00:17 localhost kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Nov 25 09:00:17 localhost kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Nov 25 09:00:17 localhost kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Nov 25 09:00:17 localhost kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Nov 25 09:00:17 localhost kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Nov 25 09:00:17 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 25 09:00:17 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 25 09:00:17 localhost systemd[1]: Reached target System Initialization.
Nov 25 09:00:17 localhost systemd[1]: Reached target Basic System.
Nov 25 09:00:17 localhost kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:17 localhost kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:17 localhost kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:17 localhost kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:17 localhost kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:17 localhost kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 25 09:00:17 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 09:00:17 localhost kernel: ata1.00: applying bridge limits
Nov 25 09:00:17 localhost kernel: ata1.00: configured for UDMA/100
Nov 25 09:00:17 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 09:00:17 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 09:00:17 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 09:00:17 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 09:00:17 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 25 09:00:18 localhost systemd[1]: Finished dracut initqueue hook.
Nov 25 09:00:18 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 09:00:18 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 09:00:18 localhost systemd[1]: Reached target Remote File Systems.
Nov 25 09:00:18 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 25 09:00:18 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 25 09:00:18 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 09:00:18 localhost systemd-fsck[530]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 09:00:18 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 09:00:18 localhost systemd[1]: Mounting /sysroot...
Nov 25 09:00:18 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 09:00:18 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 09:00:18 localhost kernel: XFS (vda1): Ending clean mount
Nov 25 09:00:18 localhost systemd[1]: Mounted /sysroot.
Nov 25 09:00:18 localhost systemd[1]: Reached target Initrd Root File System.
Nov 25 09:00:18 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 09:00:18 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 09:00:18 localhost systemd[1]: Reached target Initrd File Systems.
Nov 25 09:00:18 localhost systemd[1]: Reached target Initrd Default Target.
Nov 25 09:00:18 localhost systemd[1]: Starting dracut mount hook...
Nov 25 09:00:18 localhost systemd[1]: Finished dracut mount hook.
Nov 25 09:00:18 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 09:00:18 localhost rpc.idmapd[422]: exiting on signal 15
Nov 25 09:00:18 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 09:00:18 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 09:00:18 localhost systemd[1]: Stopped target Network.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Timer Units.
Nov 25 09:00:18 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 09:00:18 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Basic System.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Path Units.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Remote File Systems.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Slice Units.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Socket Units.
Nov 25 09:00:18 localhost systemd[1]: Stopped target System Initialization.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Local File Systems.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Swaps.
Nov 25 09:00:18 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped dracut mount hook.
Nov 25 09:00:18 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 25 09:00:18 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 09:00:18 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 09:00:18 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 25 09:00:18 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 25 09:00:18 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 09:00:18 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 09:00:18 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 09:00:18 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 09:00:18 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 25 09:00:18 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 09:00:18 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 09:00:18 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Closed udev Control Socket.
Nov 25 09:00:18 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Closed udev Kernel Socket.
Nov 25 09:00:18 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 25 09:00:18 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 25 09:00:18 localhost systemd[1]: Starting Cleanup udev Database...
Nov 25 09:00:18 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 09:00:18 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 09:00:18 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Stopped Create System Users.
Nov 25 09:00:18 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 09:00:18 localhost systemd[1]: Finished Cleanup udev Database.
Nov 25 09:00:18 localhost systemd[1]: Reached target Switch Root.
Nov 25 09:00:18 localhost systemd[1]: Starting Switch Root...
Nov 25 09:00:18 localhost systemd[1]: Switching root.
Nov 25 09:00:18 localhost systemd-journald[281]: Journal stopped
Nov 25 09:00:19 localhost systemd-journald[281]: Received SIGTERM from PID 1 (systemd).
Nov 25 09:00:19 localhost kernel: audit: type=1404 audit(1764061218.798:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 09:00:19 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:00:19 localhost kernel: SELinux:  policy capability open_perms=1
Nov 25 09:00:19 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:00:19 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:00:19 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:00:19 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:00:19 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:00:19 localhost kernel: audit: type=1403 audit(1764061218.918:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 09:00:19 localhost systemd[1]: Successfully loaded SELinux policy in 125.281ms.
Nov 25 09:00:19 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.561ms.
Nov 25 09:00:19 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 09:00:19 localhost systemd[1]: Detected virtualization kvm.
Nov 25 09:00:19 localhost systemd[1]: Detected architecture x86-64.
Nov 25 09:00:19 localhost systemd-rc-local-generator[611]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:00:19 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 25 09:00:19 localhost systemd[1]: Stopped Switch Root.
Nov 25 09:00:19 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 09:00:19 localhost systemd[1]: Created slice Slice /system/getty.
Nov 25 09:00:19 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 25 09:00:19 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 25 09:00:19 localhost systemd[1]: Created slice User and Session Slice.
Nov 25 09:00:19 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 09:00:19 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 25 09:00:19 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 09:00:19 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 09:00:19 localhost systemd[1]: Stopped target Switch Root.
Nov 25 09:00:19 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 25 09:00:19 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 25 09:00:19 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 25 09:00:19 localhost systemd[1]: Reached target Path Units.
Nov 25 09:00:19 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 25 09:00:19 localhost systemd[1]: Reached target Slice Units.
Nov 25 09:00:19 localhost systemd[1]: Reached target Swaps.
Nov 25 09:00:19 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 25 09:00:19 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 25 09:00:19 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 25 09:00:19 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 25 09:00:19 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 25 09:00:19 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 09:00:19 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 09:00:19 localhost systemd[1]: Mounting Huge Pages File System...
Nov 25 09:00:19 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 25 09:00:19 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 25 09:00:19 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 25 09:00:19 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 09:00:19 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 09:00:19 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:00:19 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 25 09:00:19 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 25 09:00:19 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 25 09:00:19 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 09:00:19 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 25 09:00:19 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 25 09:00:19 localhost systemd[1]: Stopped Journal Service.
Nov 25 09:00:19 localhost systemd[1]: Starting Journal Service...
Nov 25 09:00:19 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 09:00:19 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 25 09:00:19 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:00:19 localhost kernel: fuse: init (API version 7.37)
Nov 25 09:00:19 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 25 09:00:19 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 09:00:19 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 09:00:19 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 09:00:19 localhost systemd[1]: Mounted Huge Pages File System.
Nov 25 09:00:19 localhost systemd-journald[652]: Journal started
Nov 25 09:00:19 localhost systemd-journald[652]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:00:19 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 25 09:00:19 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 25 09:00:19 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 09:00:19 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:19 localhost systemd[1]: Started Journal Service.
Nov 25 09:00:19 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 25 09:00:19 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 25 09:00:19 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 09:00:19 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:00:19 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:00:19 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 09:00:19 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 09:00:19 localhost kernel: ACPI: bus type drm_connector registered
Nov 25 09:00:19 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 09:00:19 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 25 09:00:19 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 09:00:19 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 25 09:00:19 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 09:00:19 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 09:00:19 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 09:00:19 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 09:00:19 localhost systemd[1]: Mounting FUSE Control File System...
Nov 25 09:00:19 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 09:00:19 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 25 09:00:19 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 09:00:19 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 09:00:19 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 09:00:19 localhost systemd[1]: Starting Create System Users...
Nov 25 09:00:19 localhost systemd[1]: Mounted FUSE Control File System.
Nov 25 09:00:19 localhost systemd-journald[652]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:00:19 localhost systemd-journald[652]: Received client request to flush runtime journal.
Nov 25 09:00:19 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 09:00:19 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 09:00:19 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 09:00:19 localhost systemd[1]: Finished Create System Users.
Nov 25 09:00:19 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 09:00:19 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 09:00:19 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 09:00:19 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 09:00:19 localhost systemd[1]: Reached target Local File Systems.
Nov 25 09:00:19 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 09:00:19 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 09:00:19 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 09:00:19 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 09:00:19 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 09:00:19 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 09:00:19 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 09:00:19 localhost bootctl[669]: Couldn't find EFI system partition, skipping.
Nov 25 09:00:19 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 09:00:19 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 09:00:19 localhost systemd[1]: Starting Security Auditing Service...
Nov 25 09:00:19 localhost systemd[1]: Starting RPC Bind...
Nov 25 09:00:19 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 09:00:19 localhost auditd[675]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 09:00:19 localhost auditd[675]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 09:00:19 localhost systemd[1]: Started RPC Bind.
Nov 25 09:00:19 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 09:00:19 localhost augenrules[680]: /sbin/augenrules: No change
Nov 25 09:00:19 localhost augenrules[695]: No rules
Nov 25 09:00:19 localhost augenrules[695]: enabled 1
Nov 25 09:00:19 localhost augenrules[695]: failure 1
Nov 25 09:00:19 localhost augenrules[695]: pid 675
Nov 25 09:00:19 localhost augenrules[695]: rate_limit 0
Nov 25 09:00:19 localhost augenrules[695]: backlog_limit 8192
Nov 25 09:00:19 localhost augenrules[695]: lost 0
Nov 25 09:00:19 localhost augenrules[695]: backlog 0
Nov 25 09:00:19 localhost augenrules[695]: backlog_wait_time 60000
Nov 25 09:00:19 localhost augenrules[695]: backlog_wait_time_actual 0
Nov 25 09:00:19 localhost augenrules[695]: enabled 1
Nov 25 09:00:19 localhost augenrules[695]: failure 1
Nov 25 09:00:19 localhost augenrules[695]: pid 675
Nov 25 09:00:19 localhost augenrules[695]: rate_limit 0
Nov 25 09:00:19 localhost augenrules[695]: backlog_limit 8192
Nov 25 09:00:19 localhost augenrules[695]: lost 0
Nov 25 09:00:19 localhost augenrules[695]: backlog 0
Nov 25 09:00:19 localhost augenrules[695]: backlog_wait_time 60000
Nov 25 09:00:19 localhost augenrules[695]: backlog_wait_time_actual 0
Nov 25 09:00:19 localhost augenrules[695]: enabled 1
Nov 25 09:00:19 localhost augenrules[695]: failure 1
Nov 25 09:00:19 localhost augenrules[695]: pid 675
Nov 25 09:00:19 localhost augenrules[695]: rate_limit 0
Nov 25 09:00:19 localhost augenrules[695]: backlog_limit 8192
Nov 25 09:00:19 localhost augenrules[695]: lost 0
Nov 25 09:00:19 localhost augenrules[695]: backlog 0
Nov 25 09:00:19 localhost augenrules[695]: backlog_wait_time 60000
Nov 25 09:00:19 localhost augenrules[695]: backlog_wait_time_actual 0
Nov 25 09:00:19 localhost systemd[1]: Started Security Auditing Service.
Nov 25 09:00:19 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 09:00:19 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 09:00:19 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 09:00:19 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 25 09:00:19 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 09:00:19 localhost systemd[1]: Starting Update is Completed...
Nov 25 09:00:19 localhost systemd[1]: Finished Update is Completed.
Nov 25 09:00:19 localhost systemd-udevd[703]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 09:00:19 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 09:00:19 localhost systemd[1]: Reached target System Initialization.
Nov 25 09:00:19 localhost systemd[1]: Started dnf makecache --timer.
Nov 25 09:00:19 localhost systemd[1]: Started Daily rotation of log files.
Nov 25 09:00:19 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 09:00:19 localhost systemd[1]: Reached target Timer Units.
Nov 25 09:00:19 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 09:00:19 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 09:00:19 localhost systemd[1]: Reached target Socket Units.
Nov 25 09:00:19 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 25 09:00:19 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:00:19 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:00:19 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:00:19 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:00:19 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 09:00:19 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 25 09:00:19 localhost systemd-udevd[711]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:00:19 localhost systemd[1]: Reached target Basic System.
Nov 25 09:00:19 localhost dbus-broker-lau[726]: Ready
Nov 25 09:00:19 localhost systemd[1]: Starting NTP client/server...
Nov 25 09:00:19 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 09:00:19 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 09:00:20 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 09:00:20 localhost systemd[1]: Started irqbalance daemon.
Nov 25 09:00:20 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 09:00:20 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:00:20 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:00:20 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:00:20 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 25 09:00:20 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 09:00:20 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 25 09:00:20 localhost systemd[1]: Starting User Login Management...
Nov 25 09:00:20 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 09:00:20 localhost chronyd[754]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 09:00:20 localhost chronyd[754]: Loaded 0 symmetric keys
Nov 25 09:00:20 localhost chronyd[754]: Using right/UTC timezone to obtain leap second data
Nov 25 09:00:20 localhost chronyd[754]: Loaded seccomp filter (level 2)
Nov 25 09:00:20 localhost systemd[1]: Started NTP client/server.
Nov 25 09:00:20 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 09:00:20 localhost systemd-logind[746]: New seat seat0.
Nov 25 09:00:20 localhost systemd[1]: Started User Login Management.
Nov 25 09:00:20 localhost systemd-logind[746]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 09:00:20 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 09:00:20 localhost systemd-logind[746]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 09:00:20 localhost kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Nov 25 09:00:20 localhost kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 25 09:00:20 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 09:00:20 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 09:00:20 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 09:00:20 localhost kernel: iTCO_vendor_support: vendor-support=0
Nov 25 09:00:20 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Nov 25 09:00:20 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Nov 25 09:00:20 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 25 09:00:20 localhost kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 25 09:00:20 localhost iptables.init[741]: iptables: Applying firewall rules: [  OK  ]
Nov 25 09:00:20 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 09:00:20 localhost kernel: Console: switching to colour dummy device 80x25
Nov 25 09:00:20 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 09:00:20 localhost kernel: [drm] features: -context_init
Nov 25 09:00:20 localhost kernel: [drm] number of scanouts: 1
Nov 25 09:00:20 localhost kernel: [drm] number of cap sets: 0
Nov 25 09:00:20 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Nov 25 09:00:20 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 09:00:20 localhost kernel: Console: switching to colour frame buffer device 160x50
Nov 25 09:00:20 localhost kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 09:00:20 localhost kernel: kvm_amd: TSC scaling supported
Nov 25 09:00:20 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 25 09:00:20 localhost kernel: kvm_amd: Nested Paging enabled
Nov 25 09:00:20 localhost kernel: kvm_amd: LBR virtualization supported
Nov 25 09:00:20 localhost kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 25 09:00:20 localhost kernel: kvm_amd: Virtual GIF supported
Nov 25 09:00:20 localhost cloud-init[795]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 09:00:20 +0000. Up 5.11 seconds.
Nov 25 09:00:20 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 25 09:00:20 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 25 09:00:20 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpuea21_45.mount: Deactivated successfully.
Nov 25 09:00:20 localhost systemd[1]: Starting Hostname Service...
Nov 25 09:00:20 localhost systemd[1]: Started Hostname Service.
Nov 25 09:00:20 np0005534695 systemd-hostnamed[809]: Hostname set to <np0005534695> (static)
Nov 25 09:00:20 np0005534695 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 09:00:20 np0005534695 systemd[1]: Reached target Preparation for Network.
Nov 25 09:00:20 np0005534695 systemd[1]: Starting Network Manager...
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9554] NetworkManager (version 1.54.1-1.el9) is starting... (boot:2d6bcb16-37ad-4149-af57-9c34e1d5b606)
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9557] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9647] manager[0x5593f8485080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9676] hostname: hostname: using hostnamed
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9677] hostname: static hostname changed from (none) to "np0005534695"
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9679] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9752] manager[0x5593f8485080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9753] manager[0x5593f8485080]: rfkill: WWAN hardware radio set enabled
Nov 25 09:00:20 np0005534695 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9813] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9814] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9815] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9815] manager: Networking is enabled by state file
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9819] settings: Loaded settings plugin: keyfile (internal)
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9833] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9857] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9872] dhcp: init: Using DHCP client 'internal'
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9874] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9887] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9899] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9907] device (lo): Activation: starting connection 'lo' (5c7fd779-ef98-4a55-a168-3fc860cb7264)
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9916] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9920] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9942] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9948] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9951] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9953] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9956] device (eth0): carrier: link connected
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9962] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9966] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9971] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9976] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9977] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9981] manager: NetworkManager state is now CONNECTING
Nov 25 09:00:20 np0005534695 NetworkManager[813]: <info>  [1764061220.9983] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:00:20 np0005534695 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:00:21 np0005534695 NetworkManager[813]: <info>  [1764061221.0005] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:00:21 np0005534695 systemd[1]: Started Network Manager.
Nov 25 09:00:21 np0005534695 systemd[1]: Reached target Network.
Nov 25 09:00:21 np0005534695 NetworkManager[813]: <info>  [1764061221.0034] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:00:21 np0005534695 NetworkManager[813]: <info>  [1764061221.0040] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 09:00:21 np0005534695 systemd[1]: Starting Network Manager Wait Online...
Nov 25 09:00:21 np0005534695 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 09:00:21 np0005534695 NetworkManager[813]: <info>  [1764061221.0098] dhcp4 (eth0): state changed new lease, address=192.168.26.77
Nov 25 09:00:21 np0005534695 NetworkManager[813]: <info>  [1764061221.0104] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 09:00:21 np0005534695 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:00:21 np0005534695 NetworkManager[813]: <info>  [1764061221.0245] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 09:00:21 np0005534695 NetworkManager[813]: <info>  [1764061221.0249] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 09:00:21 np0005534695 NetworkManager[813]: <info>  [1764061221.0255] device (lo): Activation: successful, device activated.
Nov 25 09:00:21 np0005534695 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 09:00:21 np0005534695 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 09:00:21 np0005534695 systemd[1]: Reached target NFS client services.
Nov 25 09:00:21 np0005534695 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 09:00:21 np0005534695 systemd[1]: Reached target Remote File Systems.
Nov 25 09:00:21 np0005534695 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:00:22 np0005534695 NetworkManager[813]: <info>  [1764061222.8444] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:00:23 np0005534695 NetworkManager[813]: <info>  [1764061223.9320] dhcp6 (eth0): state changed new lease, address=2001:db8::2b6
Nov 25 09:00:25 np0005534695 NetworkManager[813]: <info>  [1764061225.0209] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:00:25 np0005534695 NetworkManager[813]: <info>  [1764061225.0247] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:00:25 np0005534695 NetworkManager[813]: <info>  [1764061225.0249] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:00:25 np0005534695 NetworkManager[813]: <info>  [1764061225.0253] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:00:25 np0005534695 NetworkManager[813]: <info>  [1764061225.0257] device (eth0): Activation: successful, device activated.
Nov 25 09:00:25 np0005534695 NetworkManager[813]: <info>  [1764061225.0262] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 09:00:25 np0005534695 NetworkManager[813]: <info>  [1764061225.0266] manager: startup complete
Nov 25 09:00:25 np0005534695 systemd[1]: Finished Network Manager Wait Online.
Nov 25 09:00:25 np0005534695 systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 09:00:25 np0005534695 cloud-init[879]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 09:00:25 +0000. Up 9.89 seconds.
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |  eth0  | True |        192.168.26.77         | 255.255.255.0 | global | fa:16:3e:12:d1:dc |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |  eth0  | True |      2001:db8::2b6/128       |       .       | global | fa:16:3e:12:d1:dc |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |  eth0  | True | fe80::f816:3eff:fe12:d1dc/64 |       .       |  link  | fa:16:3e:12:d1:dc |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: ++++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +-------+---------------+-------------+-----------+-------+
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: | Route |  Destination  |   Gateway   | Interface | Flags |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +-------+---------------+-------------+-----------+-------+
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   1   |  2001:db8::1  |      ::     |    eth0   |   U   |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   2   | 2001:db8::2b6 |      ::     |    eth0   |   U   |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   3   |   fe80::/64   |      ::     |    eth0   |   U   |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   4   |      ::/0     | 2001:db8::1 |    eth0   |   UG  |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   6   |     local     |      ::     |    eth0   |   U   |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   7   |     local     |      ::     |    eth0   |   U   |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: |   8   |   multicast   |      ::     |    eth0   |   U   |
Nov 25 09:00:25 np0005534695 cloud-init[879]: ci-info: +-------+---------------+-------------+-----------+-------+
Nov 25 09:00:25 np0005534695 useradd[946]: new group: name=cloud-user, GID=1001
Nov 25 09:00:25 np0005534695 useradd[946]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 25 09:00:25 np0005534695 useradd[946]: add 'cloud-user' to group 'adm'
Nov 25 09:00:25 np0005534695 useradd[946]: add 'cloud-user' to group 'systemd-journal'
Nov 25 09:00:25 np0005534695 useradd[946]: add 'cloud-user' to shadow group 'adm'
Nov 25 09:00:25 np0005534695 useradd[946]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 25 09:00:25 np0005534695 chronyd[754]: Selected source 170.187.142.180 (2.centos.pool.ntp.org)
Nov 25 09:00:25 np0005534695 chronyd[754]: System clock TAI offset set to 37 seconds
Nov 25 09:00:26 np0005534695 cloud-init[879]: Generating public/private rsa key pair.
Nov 25 09:00:26 np0005534695 cloud-init[879]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 09:00:26 np0005534695 cloud-init[879]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 09:00:26 np0005534695 cloud-init[879]: The key fingerprint is:
Nov 25 09:00:26 np0005534695 cloud-init[879]: SHA256:jcP57wX9yqtdsOehRLOWzkpRwQiAZVZGIaWFl/TxS4Y root@np0005534695
Nov 25 09:00:26 np0005534695 cloud-init[879]: The key's randomart image is:
Nov 25 09:00:26 np0005534695 cloud-init[879]: +---[RSA 3072]----+
Nov 25 09:00:26 np0005534695 cloud-init[879]: |      o*BX+.o.   |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |     .o.=o..+..  |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |       ..  E =   |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |       . +  +..  |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |        S ...+o  |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |         o  o.++ |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |          .. =o.+|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |          ..=+.=.|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |           o===..|
Nov 25 09:00:26 np0005534695 cloud-init[879]: +----[SHA256]-----+
Nov 25 09:00:26 np0005534695 cloud-init[879]: Generating public/private ecdsa key pair.
Nov 25 09:00:26 np0005534695 cloud-init[879]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 09:00:26 np0005534695 cloud-init[879]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 09:00:26 np0005534695 cloud-init[879]: The key fingerprint is:
Nov 25 09:00:26 np0005534695 cloud-init[879]: SHA256:FFV6XoNLLdTZUzmPVLUTbvitF9EAVEjTOmAuy9lwhLo root@np0005534695
Nov 25 09:00:26 np0005534695 cloud-init[879]: The key's randomart image is:
Nov 25 09:00:26 np0005534695 cloud-init[879]: +---[ECDSA 256]---+
Nov 25 09:00:26 np0005534695 cloud-init[879]: |        .o.+B*o=B|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |        ..+o.+B==|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |       ..+..=++B=|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |      ..o o+o++.=|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |       oSB  o. o.|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |      E + .    ..|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |              . .|
Nov 25 09:00:26 np0005534695 cloud-init[879]: |               . |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |                 |
Nov 25 09:00:26 np0005534695 cloud-init[879]: +----[SHA256]-----+
Nov 25 09:00:26 np0005534695 cloud-init[879]: Generating public/private ed25519 key pair.
Nov 25 09:00:26 np0005534695 cloud-init[879]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 09:00:26 np0005534695 cloud-init[879]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 09:00:26 np0005534695 cloud-init[879]: The key fingerprint is:
Nov 25 09:00:26 np0005534695 cloud-init[879]: SHA256:V+MAF7Buw3aMN0YgOpuH+mYvoY06p7R0m8iVs40etgU root@np0005534695
Nov 25 09:00:26 np0005534695 cloud-init[879]: The key's randomart image is:
Nov 25 09:00:26 np0005534695 cloud-init[879]: +--[ED25519 256]--+
Nov 25 09:00:26 np0005534695 cloud-init[879]: |      . +.o.     |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |     . . =       |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |    o   . o o    |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |     = o + + .   |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |   E+ . S B .    |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |   .+. o * .     |
Nov 25 09:00:26 np0005534695 cloud-init[879]: | o.@ o           |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |= X./            |
Nov 25 09:00:26 np0005534695 cloud-init[879]: |oO.X.+.          |
Nov 25 09:00:26 np0005534695 cloud-init[879]: +----[SHA256]-----+
Nov 25 09:00:26 np0005534695 systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 09:00:26 np0005534695 systemd[1]: Reached target Cloud-config availability.
Nov 25 09:00:26 np0005534695 systemd[1]: Reached target Network is Online.
Nov 25 09:00:26 np0005534695 systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 09:00:26 np0005534695 systemd[1]: Starting Crash recovery kernel arming...
Nov 25 09:00:26 np0005534695 systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 09:00:26 np0005534695 sm-notify[962]: Version 2.5.4 starting
Nov 25 09:00:26 np0005534695 systemd[1]: Starting System Logging Service...
Nov 25 09:00:26 np0005534695 systemd[1]: Starting OpenSSH server daemon...
Nov 25 09:00:26 np0005534695 systemd[1]: Starting Permit User Sessions...
Nov 25 09:00:26 np0005534695 systemd[1]: Started Notify NFS peers of a restart.
Nov 25 09:00:26 np0005534695 sshd[964]: Server listening on 0.0.0.0 port 22.
Nov 25 09:00:26 np0005534695 sshd[964]: Server listening on :: port 22.
Nov 25 09:00:26 np0005534695 systemd[1]: Started OpenSSH server daemon.
Nov 25 09:00:26 np0005534695 systemd[1]: Finished Permit User Sessions.
Nov 25 09:00:26 np0005534695 systemd[1]: Started Command Scheduler.
Nov 25 09:00:26 np0005534695 systemd[1]: Started Getty on tty1.
Nov 25 09:00:26 np0005534695 crond[970]: (CRON) STARTUP (1.5.7)
Nov 25 09:00:26 np0005534695 crond[970]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 25 09:00:26 np0005534695 crond[970]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 38% if used.)
Nov 25 09:00:26 np0005534695 crond[970]: (CRON) INFO (running with inotify support)
Nov 25 09:00:26 np0005534695 sshd-session[967]: Connection closed by 192.168.26.11 port 48864 [preauth]
Nov 25 09:00:26 np0005534695 systemd[1]: Started Serial Getty on ttyS0.
Nov 25 09:00:26 np0005534695 systemd[1]: Reached target Login Prompts.
Nov 25 09:00:26 np0005534695 rsyslogd[963]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="963" x-info="https://www.rsyslog.com"] start
Nov 25 09:00:26 np0005534695 rsyslogd[963]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 09:00:26 np0005534695 sshd-session[973]: Unable to negotiate with 192.168.26.11 port 48866: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 25 09:00:26 np0005534695 systemd[1]: Started System Logging Service.
Nov 25 09:00:26 np0005534695 systemd[1]: Reached target Multi-User System.
Nov 25 09:00:26 np0005534695 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 09:00:26 np0005534695 sshd-session[984]: Connection reset by 192.168.26.11 port 48876 [preauth]
Nov 25 09:00:26 np0005534695 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 09:00:26 np0005534695 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 09:00:26 np0005534695 sshd-session[991]: Unable to negotiate with 192.168.26.11 port 48892: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 25 09:00:26 np0005534695 sshd-session[994]: Unable to negotiate with 192.168.26.11 port 48900: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 25 09:00:26 np0005534695 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:00:26 np0005534695 sshd-session[1034]: Unable to negotiate with 192.168.26.11 port 48928: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 25 09:00:26 np0005534695 sshd-session[1037]: Unable to negotiate with 192.168.26.11 port 48944: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 25 09:00:26 np0005534695 kdumpctl[974]: kdump: No kdump initial ramdisk found.
Nov 25 09:00:26 np0005534695 kdumpctl[974]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 09:00:26 np0005534695 sshd-session[1000]: Connection closed by 192.168.26.11 port 48904 [preauth]
Nov 25 09:00:26 np0005534695 cloud-init[1084]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 09:00:26 +0000. Up 11.24 seconds.
Nov 25 09:00:26 np0005534695 sshd-session[1007]: Connection closed by 192.168.26.11 port 48912 [preauth]
Nov 25 09:00:26 np0005534695 systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 09:00:26 np0005534695 systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 09:00:26 np0005534695 dracut[1243]: dracut-057-102.git20250818.el9
Nov 25 09:00:27 np0005534695 cloud-init[1259]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 09:00:27 +0000. Up 11.59 seconds.
Nov 25 09:00:27 np0005534695 cloud-init[1261]: #############################################################
Nov 25 09:00:27 np0005534695 cloud-init[1262]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 09:00:27 np0005534695 cloud-init[1264]: 256 SHA256:FFV6XoNLLdTZUzmPVLUTbvitF9EAVEjTOmAuy9lwhLo root@np0005534695 (ECDSA)
Nov 25 09:00:27 np0005534695 cloud-init[1266]: 256 SHA256:V+MAF7Buw3aMN0YgOpuH+mYvoY06p7R0m8iVs40etgU root@np0005534695 (ED25519)
Nov 25 09:00:27 np0005534695 cloud-init[1268]: 3072 SHA256:jcP57wX9yqtdsOehRLOWzkpRwQiAZVZGIaWFl/TxS4Y root@np0005534695 (RSA)
Nov 25 09:00:27 np0005534695 cloud-init[1269]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 09:00:27 np0005534695 cloud-init[1270]: #############################################################
Nov 25 09:00:27 np0005534695 cloud-init[1259]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 09:00:27 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.73 seconds
Nov 25 09:00:27 np0005534695 dracut[1245]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 25 09:00:27 np0005534695 systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 09:00:27 np0005534695 systemd[1]: Reached target Cloud-init target.
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 25 09:00:27 np0005534695 dracut[1245]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 25 09:00:27 np0005534695 dracut[1245]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 25 09:00:27 np0005534695 dracut[1245]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 09:00:27 np0005534695 dracut[1245]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: memstrack is not available
Nov 25 09:00:28 np0005534695 dracut[1245]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 09:00:28 np0005534695 dracut[1245]: memstrack is not available
Nov 25 09:00:28 np0005534695 dracut[1245]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 09:00:28 np0005534695 dracut[1245]: *** Including module: systemd ***
Nov 25 09:00:28 np0005534695 dracut[1245]: *** Including module: fips ***
Nov 25 09:00:28 np0005534695 dracut[1245]: *** Including module: systemd-initrd ***
Nov 25 09:00:28 np0005534695 dracut[1245]: *** Including module: i18n ***
Nov 25 09:00:28 np0005534695 dracut[1245]: *** Including module: drm ***
Nov 25 09:00:29 np0005534695 dracut[1245]: *** Including module: prefixdevname ***
Nov 25 09:00:29 np0005534695 dracut[1245]: *** Including module: kernel-modules ***
Nov 25 09:00:29 np0005534695 kernel: block vda: the capability attribute has been deprecated.
Nov 25 09:00:29 np0005534695 dracut[1245]: *** Including module: kernel-modules-extra ***
Nov 25 09:00:29 np0005534695 dracut[1245]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 25 09:00:29 np0005534695 dracut[1245]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 25 09:00:29 np0005534695 dracut[1245]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 25 09:00:29 np0005534695 dracut[1245]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 25 09:00:29 np0005534695 dracut[1245]: *** Including module: qemu ***
Nov 25 09:00:29 np0005534695 dracut[1245]: *** Including module: fstab-sys ***
Nov 25 09:00:29 np0005534695 dracut[1245]: *** Including module: rootfs-block ***
Nov 25 09:00:29 np0005534695 dracut[1245]: *** Including module: terminfo ***
Nov 25 09:00:29 np0005534695 dracut[1245]: *** Including module: udev-rules ***
Nov 25 09:00:30 np0005534695 dracut[1245]: Skipping udev rule: 91-permissions.rules
Nov 25 09:00:30 np0005534695 dracut[1245]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 09:00:30 np0005534695 dracut[1245]: *** Including module: virtiofs ***
Nov 25 09:00:30 np0005534695 dracut[1245]: *** Including module: dracut-systemd ***
Nov 25 09:00:30 np0005534695 irqbalance[745]: Cannot change IRQ 45 affinity: Operation not permitted
Nov 25 09:00:30 np0005534695 irqbalance[745]: IRQ 45 affinity is now unmanaged
Nov 25 09:00:30 np0005534695 irqbalance[745]: Cannot change IRQ 44 affinity: Operation not permitted
Nov 25 09:00:30 np0005534695 irqbalance[745]: IRQ 44 affinity is now unmanaged
Nov 25 09:00:30 np0005534695 irqbalance[745]: Cannot change IRQ 42 affinity: Operation not permitted
Nov 25 09:00:30 np0005534695 irqbalance[745]: IRQ 42 affinity is now unmanaged
Nov 25 09:00:30 np0005534695 dracut[1245]: *** Including module: usrmount ***
Nov 25 09:00:30 np0005534695 dracut[1245]: *** Including module: base ***
Nov 25 09:00:30 np0005534695 dracut[1245]: *** Including module: fs-lib ***
Nov 25 09:00:30 np0005534695 dracut[1245]: *** Including module: kdumpbase ***
Nov 25 09:00:30 np0005534695 dracut[1245]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 09:00:30 np0005534695 dracut[1245]:   microcode_ctl module: mangling fw_dir
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel" is ignored
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 09:00:30 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 09:00:31 np0005534695 dracut[1245]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 09:00:31 np0005534695 dracut[1245]: *** Including module: openssl ***
Nov 25 09:00:31 np0005534695 dracut[1245]: *** Including module: shutdown ***
Nov 25 09:00:31 np0005534695 dracut[1245]: *** Including module: squash ***
Nov 25 09:00:31 np0005534695 dracut[1245]: *** Including modules done ***
Nov 25 09:00:31 np0005534695 dracut[1245]: *** Installing kernel module dependencies ***
Nov 25 09:00:32 np0005534695 dracut[1245]: *** Installing kernel module dependencies done ***
Nov 25 09:00:32 np0005534695 dracut[1245]: *** Resolving executable dependencies ***
Nov 25 09:00:33 np0005534695 dracut[1245]: *** Resolving executable dependencies done ***
Nov 25 09:00:33 np0005534695 dracut[1245]: *** Generating early-microcode cpio image ***
Nov 25 09:00:33 np0005534695 dracut[1245]: *** Store current command line parameters ***
Nov 25 09:00:33 np0005534695 dracut[1245]: Stored kernel commandline:
Nov 25 09:00:33 np0005534695 dracut[1245]: No dracut internal kernel commandline stored in the initramfs
Nov 25 09:00:33 np0005534695 dracut[1245]: *** Install squash loader ***
Nov 25 09:00:33 np0005534695 dracut[1245]: *** Squashing the files inside the initramfs ***
Nov 25 09:00:35 np0005534695 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:00:35 np0005534695 dracut[1245]: *** Squashing the files inside the initramfs done ***
Nov 25 09:00:35 np0005534695 dracut[1245]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 09:00:35 np0005534695 dracut[1245]: *** Hardlinking files ***
Nov 25 09:00:35 np0005534695 dracut[1245]: Mode:           real
Nov 25 09:00:35 np0005534695 dracut[1245]: Files:          50
Nov 25 09:00:35 np0005534695 dracut[1245]: Linked:         0 files
Nov 25 09:00:35 np0005534695 dracut[1245]: Compared:       0 xattrs
Nov 25 09:00:35 np0005534695 dracut[1245]: Compared:       0 files
Nov 25 09:00:35 np0005534695 dracut[1245]: Saved:          0 B
Nov 25 09:00:35 np0005534695 dracut[1245]: Duration:       0.000490 seconds
Nov 25 09:00:35 np0005534695 dracut[1245]: *** Hardlinking files done ***
Nov 25 09:00:35 np0005534695 dracut[1245]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 09:00:35 np0005534695 kdumpctl[974]: kdump: kexec: loaded kdump kernel
Nov 25 09:00:35 np0005534695 kdumpctl[974]: kdump: Starting kdump: [OK]
Nov 25 09:00:36 np0005534695 systemd[1]: Finished Crash recovery kernel arming.
Nov 25 09:00:36 np0005534695 systemd[1]: Startup finished in 1.352s (kernel) + 2.029s (initrd) + 17.211s (userspace) = 20.593s.
Nov 25 09:00:50 np0005534695 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:00:52 np0005534695 sshd-session[4370]: Accepted publickey for zuul from 192.168.26.12 port 40978 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 25 09:00:52 np0005534695 systemd[1]: Created slice User Slice of UID 1000.
Nov 25 09:00:52 np0005534695 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 09:00:52 np0005534695 systemd-logind[746]: New session 1 of user zuul.
Nov 25 09:00:52 np0005534695 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 09:00:52 np0005534695 systemd[1]: Starting User Manager for UID 1000...
Nov 25 09:00:52 np0005534695 systemd[4374]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:00:52 np0005534695 systemd[4374]: Queued start job for default target Main User Target.
Nov 25 09:00:52 np0005534695 systemd[4374]: Created slice User Application Slice.
Nov 25 09:00:52 np0005534695 systemd[4374]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:00:52 np0005534695 systemd[4374]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:00:52 np0005534695 systemd[4374]: Reached target Paths.
Nov 25 09:00:52 np0005534695 systemd[4374]: Reached target Timers.
Nov 25 09:00:52 np0005534695 systemd[4374]: Starting D-Bus User Message Bus Socket...
Nov 25 09:00:52 np0005534695 systemd[4374]: Starting Create User's Volatile Files and Directories...
Nov 25 09:00:52 np0005534695 systemd[4374]: Finished Create User's Volatile Files and Directories.
Nov 25 09:00:52 np0005534695 systemd[4374]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:00:52 np0005534695 systemd[4374]: Reached target Sockets.
Nov 25 09:00:52 np0005534695 systemd[4374]: Reached target Basic System.
Nov 25 09:00:52 np0005534695 systemd[4374]: Reached target Main User Target.
Nov 25 09:00:52 np0005534695 systemd[4374]: Startup finished in 84ms.
Nov 25 09:00:52 np0005534695 systemd[1]: Started User Manager for UID 1000.
Nov 25 09:00:52 np0005534695 systemd[1]: Started Session 1 of User zuul.
Nov 25 09:00:52 np0005534695 sshd-session[4370]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:00:53 np0005534695 python3[4456]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:00:55 np0005534695 python3[4484]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:01:00 np0005534695 python3[4538]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:01:01 np0005534695 CROND[4556]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 09:01:01 np0005534695 run-parts[4559]: (/etc/cron.hourly) starting 0anacron
Nov 25 09:01:01 np0005534695 anacron[4572]: Anacron started on 2025-11-25
Nov 25 09:01:01 np0005534695 anacron[4572]: Will run job `cron.daily' in 15 min.
Nov 25 09:01:01 np0005534695 anacron[4572]: Will run job `cron.weekly' in 35 min.
Nov 25 09:01:01 np0005534695 anacron[4572]: Will run job `cron.monthly' in 55 min.
Nov 25 09:01:01 np0005534695 anacron[4572]: Jobs will be executed sequentially
Nov 25 09:01:01 np0005534695 run-parts[4577]: (/etc/cron.hourly) finished 0anacron
Nov 25 09:01:01 np0005534695 CROND[4555]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 09:01:01 np0005534695 python3[4593]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 09:01:03 np0005534695 python3[4619]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+gNTjTZDQgtUOszUcfFNwRDhaF3fpuKv4WnYmO9LCSnBOvxKg32kLsWN4UIUhuvnqQCzM+/poM7RT3r9cQ1IsDccOYvVT/Wtp5oKX+m81fz8DhCMYa72X9A2pIXwxQsBgRDPh3oTqqaSR8H+rObzkL49NEB7PB37PSqa7bTT+RtyPa94m/b+vmwdC/CwfC0YTEjQEMXEM2Mx4n7pVA/kVzra/ScNFDdQaJmKWoA28J/ubqkvnvrg0+Z4ywfQ/0sBAXWNOR6LvQ2x4Rqd3uiHgobysScVRo2/+J5NDB1wN+flg8+oxSlhauY+97xKn03faiQ5y1cEiMT5A0Bhn89bTx0VUxzmNXXtQVA9xv3gSfMyOpzGaqf9n4N8yedXl6TXe+ascB5uWelrP6b2aqonb4EtqM7AZYKSLWXDwn7czhaMjUge52BUOKmb0asJdlTXpqZdVVMPfBnYGKIE8DNcp99rTtP5JwVDYKitUQAB45plvpUUYoKYI9h79SFYkhws= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:03 np0005534695 python3[4643]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:03 np0005534695 python3[4742]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:04 np0005534695 python3[4813]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061263.5627446-252-139101440177295/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=46be1ba69aef4b9caa3787efccecaa0c_id_rsa follow=False checksum=ab873ac71b169d81ba60edcb9a3df54902eb3861 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:04 np0005534695 python3[4936]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:04 np0005534695 python3[5007]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061264.2024345-307-248920839243583/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=46be1ba69aef4b9caa3787efccecaa0c_id_rsa.pub follow=False checksum=bf193b190ac8dfe414ab48ea4e2bf3db22ed6209 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:05 np0005534695 python3[5055]: ansible-ping Invoked with data=pong
Nov 25 09:01:06 np0005534695 python3[5079]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:01:07 np0005534695 python3[5133]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 09:01:08 np0005534695 python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534695 python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534695 python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534695 python3[5237]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534695 python3[5261]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534695 python3[5285]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:11 np0005534695 sudo[5309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntxykodlcawvvkevtzthlnzexobadswo ; /usr/bin/python3'
Nov 25 09:01:11 np0005534695 sudo[5309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:11 np0005534695 python3[5311]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:11 np0005534695 sudo[5309]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:11 np0005534695 sudo[5387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndnjsuqcfkmdguhczzgqgqvbsfxpfyum ; /usr/bin/python3'
Nov 25 09:01:11 np0005534695 sudo[5387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:11 np0005534695 python3[5389]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:11 np0005534695 sudo[5387]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:11 np0005534695 sudo[5460]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpolyhzkcjkwekopnwxdmudmupxsmkub ; /usr/bin/python3'
Nov 25 09:01:11 np0005534695 sudo[5460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:11 np0005534695 python3[5462]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061271.2602472-32-128753203720798/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:11 np0005534695 sudo[5460]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:12 np0005534695 python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:12 np0005534695 python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:12 np0005534695 python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534695 python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534695 python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534695 python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534695 python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534695 python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534695 python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534695 python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534695 python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534695 python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534695 python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534695 python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534695 python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534695 python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534695 python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534695 python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534695 python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534695 python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534695 python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534695 python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:17 np0005534695 python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:17 np0005534695 python3[6062]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:17 np0005534695 python3[6086]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:17 np0005534695 python3[6110]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
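
The twenty-five ansible-authorized_key invocations above install the CI operators' public keys for the zuul user, one module call per key. A minimal shell reconstruction of the same loop, assuming the keys are collected one per line in keys.txt (a hypothetical local file); each iteration corresponds to one log entry:

    # Re-create the authorized_key calls seen above (user=zuul state=present)
    while IFS= read -r pubkey; do
        ansible localhost -b -m ansible.builtin.authorized_key \
            -a "user=zuul state=present key='${pubkey}'"
    done < keys.txt
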
Nov 25 09:01:19 np0005534695 sudo[6134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxmokbamhggwiexehugqblehapujvffm ; /usr/bin/python3'
Nov 25 09:01:19 np0005534695 sudo[6134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:19 np0005534695 python3[6136]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 09:01:19 np0005534695 systemd[1]: Starting Time & Date Service...
Nov 25 09:01:19 np0005534695 systemd[1]: Started Time & Date Service.
Nov 25 09:01:19 np0005534695 systemd-timedated[6138]: Changed time zone to 'UTC' (UTC).
Nov 25 09:01:19 np0005534695 sudo[6134]: pam_unix(sudo:session): session closed for user root
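
The community.general.timezone call above drives systemd-timedated over D-Bus, which is why the Time & Date Service starts on demand and logs the change. A standalone equivalent on EL9:

    # Same effect as ansible-community.general.timezone name=UTC
    timedatectl set-timezone UTC
    timedatectl show --property=Timezone    # expect: Timezone=UTC
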
Nov 25 09:01:19 np0005534695 sudo[6165]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzdjnekvpiqnclnaadjupeffnxcbctwg ; /usr/bin/python3'
Nov 25 09:01:19 np0005534695 sudo[6165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:20 np0005534695 python3[6167]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:20 np0005534695 sudo[6165]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:20 np0005534695 python3[6243]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:20 np0005534695 python3[6314]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764061280.1768668-253-117981625757356/source _original_basename=tmpqpqlvg3t follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:20 np0005534695 python3[6414]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:21 np0005534695 python3[6485]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764061280.7553542-303-111017032243346/source _original_basename=tmpa9pvhjgd follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:21 np0005534695 sudo[6585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsitfzeycslymogxvbcxiemsxrynhyin ; /usr/bin/python3'
Nov 25 09:01:21 np0005534695 sudo[6585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:21 np0005534695 python3[6587]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:21 np0005534695 sudo[6585]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:21 np0005534695 sudo[6658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpetpcdcrsuijgozkflixmnrcynotlvs ; /usr/bin/python3'
Nov 25 09:01:21 np0005534695 sudo[6658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:22 np0005534695 python3[6660]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764061281.6205745-382-223151800797308/source _original_basename=tmp5x3f63w_ follow=False checksum=559f1cb2ab360851cdb2472955da2ae098969dfb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:22 np0005534695 sudo[6658]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:22 np0005534695 python3[6708]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:01:22 np0005534695 python3[6734]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
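
Two details in the /etc/nodepool sequence are easy to misread: mode=511 is the decimal form of octal 0777, and checksum da39a3ee5e6b4b0d3255bfef95601890afd80709 is the SHA-1 of empty content, so sub_nodes and sub_nodes_private are written as empty files. A shell sketch of the same steps:

    install -d -m 0777 /etc/nodepool           # mode=511 decimal == 0777 octal
    : > /etc/nodepool/sub_nodes                # empty file (SHA-1 da39a3ee...)
    : > /etc/nodepool/sub_nodes_private
    cp ~zuul/.ssh/id_rsa     /etc/nodepool/id_rsa
    cp ~zuul/.ssh/id_rsa.pub /etc/nodepool/id_rsa.pub
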
Nov 25 09:01:22 np0005534695 sudo[6812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esdephbsgdftecphzpojzbukstnsansa ; /usr/bin/python3'
Nov 25 09:01:22 np0005534695 sudo[6812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:22 np0005534695 python3[6814]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:22 np0005534695 sudo[6812]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:23 np0005534695 sudo[6885]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moqhrerzflcshdlpltkscbahayuhvnio ; /usr/bin/python3'
Nov 25 09:01:23 np0005534695 sudo[6885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:23 np0005534695 python3[6887]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061282.73511-453-45010028536427/source _original_basename=tmpxdqt6d1s follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:23 np0005534695 sudo[6885]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:23 np0005534695 sudo[6936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxluszcvsjkjzcbzohdnubtisbzbapkk ; /usr/bin/python3'
Nov 25 09:01:23 np0005534695 sudo[6936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:23 np0005534695 python3[6938]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-bfa5-76fa-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:01:23 np0005534695 sudo[6936]: pam_unix(sudo:session): session closed for user root
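
Similarly, mode=288 on the sudoers drop-in is decimal for octal 0440, the permission sudo expects, and the follow-up visudo -c run validates syntax before the file is trusted. The equivalent commands, with zuul-sudo-grep standing in for the locally rendered source file:

    install -m 0440 zuul-sudo-grep /etc/sudoers.d/zuul-sudo-grep   # 288 == 0440
    visudo -c    # parse check of /etc/sudoers and /etc/sudoers.d/*
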
Nov 25 09:01:24 np0005534695 python3[6966]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e08-49e2-bfa5-76fa-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 09:01:25 np0005534695 python3[6994]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:31 np0005534695 chronyd[754]: Selected source 204.197.163.71 (2.centos.pool.ntp.org)
Nov 25 09:01:41 np0005534695 sudo[7018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivvnhorbeiqzomdzypnsbukkexdkeyrh ; /usr/bin/python3'
Nov 25 09:01:41 np0005534695 sudo[7018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:41 np0005534695 python3[7020]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:41 np0005534695 sudo[7018]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:49 np0005534695 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 09:02:27 np0005534695 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 25 09:02:27 np0005534695 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 09:02:27 np0005534695 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 09:02:27 np0005534695 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Nov 25 09:02:27 np0005534695 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Nov 25 09:02:27 np0005534695 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Nov 25 09:02:27 np0005534695 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Nov 25 09:02:28 np0005534695 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0286] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 09:02:28 np0005534695 systemd-udevd[7023]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0406] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0422] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0425] device (eth1): carrier: link connected
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0426] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0430] policy: auto-activating connection 'Wired connection 1' (9932a87b-87bf-3422-b00e-c134c1ad07fd)
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0432] device (eth1): Activation: starting connection 'Wired connection 1' (9932a87b-87bf-3422-b00e-c134c1ad07fd)
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0433] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0434] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0437] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:02:28 np0005534695 NetworkManager[813]: <info>  [1764061348.0440] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
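
The burst at 09:02:27-28 is a virtio-net hot-plug: the kernel enumerates PCI device 0000:07:00.0 and assigns its BARs, NetworkManager picks up the new eth1, creates the default 'Wired connection 1' profile, and starts a DHCP transaction. The job then enumerates links in JSON, as the next entry shows; a sketch of that check (the jq filter is added here for readability and is not part of the logged command):

    ip -j link | jq -r '.[].ifname'           # JSON link list, as run at 09:02:28
    nmcli -f GENERAL.STATE device show eth1   # confirm NM now manages eth1
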
Nov 25 09:02:28 np0005534695 python3[7050]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-c32a-ccd7-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:02:38 np0005534695 sudo[7128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxjcufzzdvvbpsztyzuekpnlqmvtfdlr ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:02:38 np0005534695 sudo[7128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:02:38 np0005534695 python3[7130]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:02:38 np0005534695 sudo[7128]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:38 np0005534695 sudo[7201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhpcleuwfjimorptiyejpevcrnsfelis ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:02:38 np0005534695 sudo[7201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:02:38 np0005534695 python3[7203]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061358.4367235-161-130748797513880/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=872a5403a26d901f323042a7c91e65eb896cd84e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:02:38 np0005534695 sudo[7201]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:39 np0005534695 sudo[7251]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddxxbtklvdmmfuxbfnideiqmwycrlxy ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:02:39 np0005534695 sudo[7251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:02:39 np0005534695 python3[7253]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
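
Deploying the keyfile with 0600 root:root and then restarting NetworkManager (rather than reloading profiles) is what produces the SIGTERM and full restart sequence that follows. A standalone equivalent, assuming the rendered ci-private-network.nmconnection is in the working directory:

    install -m 0600 -o root -g root ci-private-network.nmconnection \
        /etc/NetworkManager/system-connections/ci-private-network.nmconnection
    systemctl restart NetworkManager    # re-reads system-connections on startup
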
Nov 25 09:02:39 np0005534695 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 09:02:39 np0005534695 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 09:02:39 np0005534695 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2706] caught SIGTERM, shutting down normally.
Nov 25 09:02:39 np0005534695 systemd[1]: Stopping Network Manager...
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2711] dhcp4 (eth0): canceled DHCP transaction
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2711] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2712] dhcp4 (eth0): state changed no lease
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2713] dhcp6 (eth0): canceled DHCP transaction
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2713] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2713] dhcp6 (eth0): state changed no lease
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2714] manager: NetworkManager state is now CONNECTING
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2775] dhcp4 (eth1): canceled DHCP transaction
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2775] dhcp4 (eth1): state changed no lease
Nov 25 09:02:39 np0005534695 NetworkManager[813]: <info>  [1764061359.2793] exiting (success)
Nov 25 09:02:39 np0005534695 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:02:39 np0005534695 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:02:39 np0005534695 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 09:02:39 np0005534695 systemd[1]: Stopped Network Manager.
Nov 25 09:02:39 np0005534695 systemd[1]: Starting Network Manager...
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3158] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:2d6bcb16-37ad-4149-af57-9c34e1d5b606)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3159] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3197] manager[0x55a85d19f090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 09:02:39 np0005534695 systemd[1]: Starting Hostname Service...
Nov 25 09:02:39 np0005534695 systemd[1]: Started Hostname Service.
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3752] hostname: hostname: using hostnamed
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3753] hostname: static hostname changed from (none) to "np0005534695"
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3755] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3758] manager[0x55a85d19f090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3758] manager[0x55a85d19f090]: rfkill: WWAN hardware radio set enabled
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3775] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3775] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3776] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3776] manager: Networking is enabled by state file
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3777] settings: Loaded settings plugin: keyfile (internal)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3780] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3796] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3802] dhcp: init: Using DHCP client 'internal'
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3804] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3806] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3810] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3815] device (lo): Activation: starting connection 'lo' (5c7fd779-ef98-4a55-a168-3fc860cb7264)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3819] device (eth0): carrier: link connected
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3822] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3825] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3825] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3829] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3833] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3837] device (eth1): carrier: link connected
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3839] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3843] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (9932a87b-87bf-3422-b00e-c134c1ad07fd) (indicated)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3843] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3846] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3850] device (eth1): Activation: starting connection 'Wired connection 1' (9932a87b-87bf-3422-b00e-c134c1ad07fd)
Nov 25 09:02:39 np0005534695 systemd[1]: Started Network Manager.
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3853] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3856] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3857] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3858] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3859] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3860] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3861] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3862] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3863] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3867] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3868] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3870] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3871] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3876] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3881] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3899] dhcp4 (eth0): state changed new lease, address=192.168.26.77
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3903] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 09:02:39 np0005534695 systemd[1]: Starting Network Manager Wait Online...
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3923] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3924] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 09:02:39 np0005534695 NetworkManager[7265]: <info>  [1764061359.3927] device (lo): Activation: successful, device activated.
Nov 25 09:02:39 np0005534695 sudo[7251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:39 np0005534695 python3[7325]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-c32a-ccd7-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:02:40 np0005534695 NetworkManager[7265]: <info>  [1764061360.4084] dhcp6 (eth0): state changed new lease, address=2001:db8::2b6
Nov 25 09:02:40 np0005534695 NetworkManager[7265]: <info>  [1764061360.4093] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:02:40 np0005534695 NetworkManager[7265]: <info>  [1764061360.4118] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:02:40 np0005534695 NetworkManager[7265]: <info>  [1764061360.4119] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:02:40 np0005534695 NetworkManager[7265]: <info>  [1764061360.4121] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:02:40 np0005534695 NetworkManager[7265]: <info>  [1764061360.4124] device (eth0): Activation: successful, device activated.
Nov 25 09:02:40 np0005534695 NetworkManager[7265]: <info>  [1764061360.4128] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 09:02:50 np0005534695 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:02:53 np0005534695 systemd[4374]: Starting Mark boot as successful...
Nov 25 09:02:53 np0005534695 systemd[4374]: Finished Mark boot as successful.
Nov 25 09:03:09 np0005534695 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4124] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:03:24 np0005534695 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:03:24 np0005534695 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4383] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4385] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4390] device (eth1): Activation: successful, device activated.
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4394] manager: startup complete
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4395] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <warn>  [1764061404.4398] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4402] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:03:24 np0005534695 systemd[1]: Finished Network Manager Wait Online.
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4514] dhcp4 (eth1): canceled DHCP transaction
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4515] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4515] dhcp4 (eth1): state changed no lease
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4525] policy: auto-activating connection 'ci-private-network' (0e60f29c-c89d-5595-9135-fa0fd01bf23b)
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4528] device (eth1): Activation: starting connection 'ci-private-network' (0e60f29c-c89d-5595-9135-fa0fd01bf23b)
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4529] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4531] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4535] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4541] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4564] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4565] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:03:24 np0005534695 NetworkManager[7265]: <info>  [1764061404.4570] device (eth1): Activation: successful, device activated.
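
Note the recovery at 09:03:24: DHCP on eth1 never obtains a lease, so 'Wired connection 1' fails with ip-config-unavailable, and NetworkManager immediately auto-activates the freshly installed ci-private-network profile instead, which comes up without waiting for a lease. One way to confirm which profile won:

    nmcli -f NAME,UUID,DEVICE connection show --active
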
Nov 25 09:03:34 np0005534695 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:03:39 np0005534695 sshd-session[4383]: Received disconnect from 192.168.26.12 port 40978:11: disconnected by user
Nov 25 09:03:39 np0005534695 sshd-session[4383]: Disconnected from user zuul 192.168.26.12 port 40978
Nov 25 09:03:39 np0005534695 sshd-session[4370]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:03:39 np0005534695 systemd-logind[746]: Session 1 logged out. Waiting for processes to exit.
Nov 25 09:03:44 np0005534695 sshd-session[7373]: Accepted publickey for zuul from 192.168.26.12 port 58692 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:03:44 np0005534695 systemd-logind[746]: New session 3 of user zuul.
Nov 25 09:03:44 np0005534695 systemd[1]: Started Session 3 of User zuul.
Nov 25 09:03:44 np0005534695 sshd-session[7373]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:03:44 np0005534695 sudo[7452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcfyeidlvgvvuvhaadamcwogjfcqgejv ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:03:44 np0005534695 sudo[7452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:03:45 np0005534695 python3[7454]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:03:45 np0005534695 sudo[7452]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:45 np0005534695 sudo[7525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbzigodflywayzcaiwcbsmvoaehmcicj ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:03:45 np0005534695 sudo[7525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:03:45 np0005534695 python3[7527]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061424.9192746-379-279395265664593/source _original_basename=tmpox0rdefo follow=False checksum=5493b85a684a9b4806ca892e69594374a0bfd8b8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:03:45 np0005534695 sudo[7525]: pam_unix(sudo:session): session closed for user root
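
The gathered networking facts are persisted under /etc/ci/env for later plays; the file body itself is not logged (content=NOT_LOGGING_PARAMETER). A sketch of the equivalent steps, with networking-info.yml standing in for the locally rendered file:

    install -d -m 0755 /etc/ci/env              # created earlier, at 09:01:41
    install -m 0644 -o root -g root networking-info.yml \
        /etc/ci/env/networking-info.yml
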
Nov 25 09:03:47 np0005534695 sshd-session[7376]: Connection closed by 192.168.26.12 port 58692
Nov 25 09:03:47 np0005534695 sshd-session[7373]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:03:47 np0005534695 systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 09:03:47 np0005534695 systemd-logind[746]: Session 3 logged out. Waiting for processes to exit.
Nov 25 09:03:47 np0005534695 systemd-logind[746]: Removed session 3.
Nov 25 09:05:53 np0005534695 systemd[4374]: Created slice User Background Tasks Slice.
Nov 25 09:05:53 np0005534695 systemd[4374]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 09:05:53 np0005534695 systemd[4374]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 09:08:20 np0005534695 sshd-session[7556]: Accepted publickey for zuul from 192.168.26.12 port 40482 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:08:20 np0005534695 systemd-logind[746]: New session 4 of user zuul.
Nov 25 09:08:21 np0005534695 systemd[1]: Started Session 4 of User zuul.
Nov 25 09:08:21 np0005534695 sshd-session[7556]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:08:21 np0005534695 sudo[7583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buojtycyytjvrlicwqmixdirbfvdazvj ; /usr/bin/python3'
Nov 25 09:08:21 np0005534695 sudo[7583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:21 np0005534695 python3[7585]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e08-49e2-292d-b97b-000000001cda-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:21 np0005534695 sudo[7583]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 np0005534695 sudo[7612]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrvhthnvxlzucearvtcovtkktfqfaiqe ; /usr/bin/python3'
Nov 25 09:08:21 np0005534695 sudo[7612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:21 np0005534695 python3[7614]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:21 np0005534695 sudo[7612]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 np0005534695 sudo[7638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvvdkgubxbxkdoiwldbrrrbaayiktsvv ; /usr/bin/python3'
Nov 25 09:08:21 np0005534695 sudo[7638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:21 np0005534695 python3[7640]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:21 np0005534695 sudo[7638]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 np0005534695 sudo[7664]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpfsmhqbokkkvpcaewqbulghpubcxics ; /usr/bin/python3'
Nov 25 09:08:21 np0005534695 sudo[7664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:21 np0005534695 python3[7666]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:21 np0005534695 sudo[7664]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:22 np0005534695 sudo[7690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znjxipinvxwiefqqmqpjuityefytgrmf ; /usr/bin/python3'
Nov 25 09:08:22 np0005534695 sudo[7690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:22 np0005534695 python3[7692]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:22 np0005534695 sudo[7690]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:22 np0005534695 sudo[7716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avtirwwbalclmglifoqztlumebvogycn ; /usr/bin/python3'
Nov 25 09:08:22 np0005534695 sudo[7716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:22 np0005534695 python3[7718]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:22 np0005534695 sudo[7716]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:22 np0005534695 sudo[7794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxvsuffuxpibqvgldtxkejxtdxeelvbi ; /usr/bin/python3'
Nov 25 09:08:22 np0005534695 sudo[7794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:23 np0005534695 python3[7796]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:08:23 np0005534695 sudo[7794]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:23 np0005534695 sudo[7867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpmgrvdupvysbcgugljzdytauynjwpnu ; /usr/bin/python3'
Nov 25 09:08:23 np0005534695 sudo[7867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:23 np0005534695 python3[7869]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061702.8542328-512-16589707562235/source _original_basename=tmpri7xxpnh follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:23 np0005534695 sudo[7867]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:23 np0005534695 sudo[7917]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmvyerserzjgwjacmdesrnndeswywobr ; /usr/bin/python3'
Nov 25 09:08:23 np0005534695 sudo[7917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:24 np0005534695 python3[7919]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:08:24 np0005534695 systemd[1]: Reloading.
Nov 25 09:08:24 np0005534695 systemd-rc-local-generator[7937]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:08:24 np0005534695 sudo[7917]: pam_unix(sudo:session): session closed for user root
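
The override.conf body is likewise unlogged, and whatever it sets only reaches the running manager after the daemon_reload above (the "systemd[1]: Reloading." entry). The equivalent commands, with override.conf as a placeholder for the rendered drop-in:

    install -d -m 0755 /etc/systemd/system.conf.d
    install -m 0644 override.conf /etc/systemd/system.conf.d/override.conf
    systemctl daemon-reload
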
Nov 25 09:08:25 np0005534695 sudo[7972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quuskgelgnviqsxugqilxqhpfeoacwjj ; /usr/bin/python3'
Nov 25 09:08:25 np0005534695 sudo[7972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:25 np0005534695 python3[7974]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 09:08:25 np0005534695 sudo[7972]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:25 np0005534695 sudo[7998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imuixazgnkcxoglhrmkgpypfwijixazi ; /usr/bin/python3'
Nov 25 09:08:25 np0005534695 sudo[7998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:25 np0005534695 python3[8000]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:25 np0005534695 sudo[7998]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:25 np0005534695 sudo[8026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyaahnohueaetzvausamjoyknbkxfxji ; /usr/bin/python3'
Nov 25 09:08:25 np0005534695 sudo[8026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:25 np0005534695 python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:25 np0005534695 sudo[8026]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:26 np0005534695 sudo[8054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tekjlbjqvzrrjkqvozuukczkiyfislnl ; /usr/bin/python3'
Nov 25 09:08:26 np0005534695 sudo[8054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:26 np0005534695 python3[8056]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:26 np0005534695 sudo[8054]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:26 np0005534695 sudo[8082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdhjivxdmjbtafiqqkkxlcrmaefpjdbx ; /usr/bin/python3'
Nov 25 09:08:26 np0005534695 sudo[8082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:26 np0005534695 python3[8084]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:26 np0005534695 sudo[8082]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:26 np0005534695 python3[8111]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e08-49e2-292d-b97b-000000001ce1-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:27 np0005534695 python3[8141]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
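
The 09:08:21-27 sequence throttles disk I/O per top-level cgroup: lsblk resolves /dev/vda to its MAJ:MIN pair (252:0 here), the same limit line is written into init.scope and the machine, system, and user slices, and the values are read back for verification. 262144000 B/s is a 250 MiB/s cap in each direction, alongside 18000 read and 18000 write IOPS. A condensed sketch, assuming cgroup v2 with the io controller enabled (which the io.max files imply):

    majmin=$(lsblk -nd -o MAJ:MIN /dev/vda | tr -d ' ')    # -> 252:0
    for grp in init.scope machine.slice system.slice user.slice; do
        echo "$majmin riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$grp/io.max"
        cat "/sys/fs/cgroup/$grp/io.max"                   # verify, as the job does
    done
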
Nov 25 09:08:29 np0005534695 sshd-session[7559]: Connection closed by 192.168.26.12 port 40482
Nov 25 09:08:29 np0005534695 sshd-session[7556]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:08:29 np0005534695 systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 09:08:29 np0005534695 systemd[1]: session-4.scope: Consumed 3.052s CPU time.
Nov 25 09:08:29 np0005534695 systemd-logind[746]: Session 4 logged out. Waiting for processes to exit.
Nov 25 09:08:29 np0005534695 systemd-logind[746]: Removed session 4.
Nov 25 09:08:31 np0005534695 sshd-session[8146]: Accepted publickey for zuul from 192.168.26.12 port 46590 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:08:31 np0005534695 systemd-logind[746]: New session 5 of user zuul.
Nov 25 09:08:31 np0005534695 systemd[1]: Started Session 5 of User zuul.
Nov 25 09:08:31 np0005534695 sshd-session[8146]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:08:31 np0005534695 sudo[8173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfkhgrabfjdhtoqqxupspjmxkkvprea ; /usr/bin/python3'
Nov 25 09:08:31 np0005534695 sudo[8173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:31 np0005534695 python3[8175]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 09:08:40 np0005534695 irqbalance[745]: Cannot change IRQ 43 affinity: Operation not permitted
Nov 25 09:08:40 np0005534695 irqbalance[745]: IRQ 43 affinity is now unmanaged
Nov 25 09:08:45 np0005534695 kernel: SELinux:  Converting 387 SID table entries...
Nov 25 09:08:45 np0005534695 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:08:45 np0005534695 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:08:45 np0005534695 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:08:45 np0005534695 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:08:45 np0005534695 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:08:45 np0005534695 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:08:45 np0005534695 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:08:51 np0005534695 kernel: SELinux:  Converting 387 SID table entries...
Nov 25 09:08:51 np0005534695 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:08:51 np0005534695 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:08:51 np0005534695 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:08:51 np0005534695 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:08:51 np0005534695 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:08:51 np0005534695 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:08:51 np0005534695 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:08:58 np0005534695 kernel: SELinux:  Converting 387 SID table entries...
Nov 25 09:08:58 np0005534695 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:08:58 np0005534695 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:08:58 np0005534695 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:08:58 np0005534695 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:08:58 np0005534695 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:08:58 np0005534695 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:08:58 np0005534695 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:08:59 np0005534695 setsebool[8243]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 09:08:59 np0005534695 setsebool[8243]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
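The two setsebool records above enable NFS access and full capabilities for virtualized/sandboxed workloads. A sketch of the equivalent manual change (the log does not show whether -P was used; it is shown persistent here as an assumption):

    setsebool -P virt_use_nfs 1
    setsebool -P virt_sandbox_use_all_caps 1
    getsebool virt_use_nfs virt_sandbox_use_all_caps   # verify both report "on"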
Nov 25 09:09:07 np0005534695 kernel: SELinux:  Converting 390 SID table entries...
Nov 25 09:09:07 np0005534695 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:09:07 np0005534695 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:09:07 np0005534695 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:09:07 np0005534695 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:09:07 np0005534695 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:09:07 np0005534695 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:09:07 np0005534695 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:09:20 np0005534695 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 09:09:20 np0005534695 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:09:20 np0005534695 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:09:20 np0005534695 systemd[1]: Reloading.
Nov 25 09:09:20 np0005534695 systemd-rc-local-generator[8995]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:09:20 np0005534695 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:09:21 np0005534695 sudo[8173]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:35 np0005534695 python3[23711]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                              _uses_shell=True zuul_log_id=fa163e08-49e2-29e8-c457-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:09:36 np0005534695 kernel: evm: overlay not supported
Nov 25 09:09:36 np0005534695 systemd[4374]: Starting D-Bus User Message Bus...
Nov 25 09:09:36 np0005534695 dbus-broker-launch[24515]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 09:09:36 np0005534695 dbus-broker-launch[24515]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 09:09:36 np0005534695 systemd[4374]: Started D-Bus User Message Bus.
Nov 25 09:09:36 np0005534695 dbus-broker-lau[24515]: Ready
Nov 25 09:09:36 np0005534695 systemd[4374]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 09:09:36 np0005534695 systemd[4374]: Created slice Slice /user.
Nov 25 09:09:36 np0005534695 systemd[4374]: podman-24448.scope: unit configures an IP firewall, but not running as root.
Nov 25 09:09:36 np0005534695 systemd[4374]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 09:09:36 np0005534695 systemd[4374]: Started podman-24448.scope.
Nov 25 09:09:36 np0005534695 systemd[4374]: Started podman-pause-bbde2b49.scope.
Nov 25 09:09:37 np0005534695 sshd-session[8149]: Connection closed by 192.168.26.12 port 46590
Nov 25 09:09:37 np0005534695 sshd-session[8146]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:09:37 np0005534695 systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 09:09:37 np0005534695 systemd[1]: session-5.scope: Consumed 44.420s CPU time.
Nov 25 09:09:37 np0005534695 systemd-logind[746]: Session 5 logged out. Waiting for processes to exit.
Nov 25 09:09:37 np0005534695 systemd-logind[746]: Removed session 5.
Nov 25 09:09:41 np0005534695 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:09:41 np0005534695 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:09:41 np0005534695 systemd[1]: man-db-cache-update.service: Consumed 26.347s CPU time.
Nov 25 09:09:41 np0005534695 systemd[1]: run-rd17905a8ef464afb8381ebc19bbb942e.service: Deactivated successfully.
Nov 25 09:09:53 np0005534695 sshd-session[29619]: Connection closed by 192.168.26.191 port 53584 [preauth]
Nov 25 09:09:53 np0005534695 sshd-session[29620]: Connection closed by 192.168.26.191 port 53588 [preauth]
Nov 25 09:09:53 np0005534695 sshd-session[29621]: Unable to negotiate with 192.168.26.191 port 53596: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 09:09:53 np0005534695 sshd-session[29622]: Unable to negotiate with 192.168.26.191 port 53608: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 09:09:53 np0005534695 sshd-session[29623]: Unable to negotiate with 192.168.26.191 port 53614: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 09:10:02 np0005534695 sshd-session[29629]: Accepted publickey for zuul from 192.168.26.12 port 39504 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:10:02 np0005534695 systemd-logind[746]: New session 6 of user zuul.
Nov 25 09:10:02 np0005534695 systemd[1]: Started Session 6 of User zuul.
Nov 25 09:10:02 np0005534695 sshd-session[29629]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:10:02 np0005534695 python3[29656]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:10:02 np0005534695 sudo[29680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxnvfkkrauvaiohhhpynksjiabsqnpte ; /usr/bin/python3'
Nov 25 09:10:02 np0005534695 sudo[29680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:02 np0005534695 python3[29682]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:10:02 np0005534695 sudo[29680]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:03 np0005534695 sudo[29706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jixbbppmobyhswlmqoxdjqzxqaeqsojf ; /usr/bin/python3'
Nov 25 09:10:03 np0005534695 sudo[29706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:03 np0005534695 python3[29708]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005534695 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 09:10:03 np0005534695 useradd[29710]: new group: name=cloud-admin, GID=1002
Nov 25 09:10:03 np0005534695 useradd[29710]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 25 09:10:03 np0005534695 sudo[29706]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:03 np0005534695 sudo[29740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypsatgxxibfsnrihnzogrqwixhnfngwr ; /usr/bin/python3'
Nov 25 09:10:03 np0005534695 sudo[29740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:03 np0005534695 python3[29742]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:10:03 np0005534695 sudo[29740]: pam_unix(sudo:session): session closed for user root
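The three authorized_key tasks above push the same ECDSA key from zuul@np0005534693 to the zuul, root, and cloud-admin accounts. A minimal shell sketch of what the module does (key abbreviated; the full key appears in the log above):

    key='ecdsa-sha2-nistp256 AAAAE2Vj... zuul@np0005534693'
    for u in zuul root cloud-admin; do
        h=$(getent passwd "$u" | cut -d: -f6)
        install -d -m 700 -o "$u" -g "$u" "$h/.ssh"
        grep -qxF "$key" "$h/.ssh/authorized_keys" 2>/dev/null || echo "$key" >> "$h/.ssh/authorized_keys"
        chown "$u:$u" "$h/.ssh/authorized_keys"
        chmod 600 "$h/.ssh/authorized_keys"
    done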
Nov 25 09:10:03 np0005534695 sudo[29818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frtjaftfrzzdedljsfvqzvygbhokgvbi ; /usr/bin/python3'
Nov 25 09:10:03 np0005534695 sudo[29818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:03 np0005534695 python3[29820]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:10:03 np0005534695 sudo[29818]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:04 np0005534695 sudo[29891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfgidmrtqwfrvigqryfkrqmtlijgibhm ; /usr/bin/python3'
Nov 25 09:10:04 np0005534695 sudo[29891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:04 np0005534695 python3[29893]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061803.736595-153-96656989604455/source _original_basename=tmpug85800i follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:10:04 np0005534695 sudo[29891]: pam_unix(sudo:session): session closed for user root
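The copy above installs a sudoers drop-in for the new cloud-admin user. Its contents are not logged (only a checksum), so the rule below is an assumption; only the path and mode 0640 come from the record:

    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin   # assumed rule
    chmod 0640 /etc/sudoers.d/cloud-admin       # mode matches the logged copy task
    visudo -cf /etc/sudoers.d/cloud-admin       # always validate a drop-in before relying on it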
Nov 25 09:10:04 np0005534695 sudo[29941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgsxaqhjbrjjyrbmkdehijgulczokgcv ; /usr/bin/python3'
Nov 25 09:10:04 np0005534695 sudo[29941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:05 np0005534695 python3[29943]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Nov 25 09:10:05 np0005534695 systemd[1]: Starting Hostname Service...
Nov 25 09:10:05 np0005534695 systemd[1]: Started Hostname Service.
Nov 25 09:10:05 np0005534695 systemd-hostnamed[29947]: Changed pretty hostname to 'compute-1'
Nov 25 09:10:05 compute-1 systemd-hostnamed[29947]: Hostname set to <compute-1> (static)
Nov 25 09:10:05 compute-1 NetworkManager[7265]: <info>  [1764061805.1423] hostname: static hostname changed from "np0005534695" to "compute-1"
Nov 25 09:10:05 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:10:05 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:10:05 compute-1 sudo[29941]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:05 compute-1 sshd-session[29632]: Connection closed by 192.168.26.12 port 39504
Nov 25 09:10:05 compute-1 sshd-session[29629]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:10:05 compute-1 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 09:10:05 compute-1 systemd[1]: session-6.scope: Consumed 1.637s CPU time.
Nov 25 09:10:05 compute-1 systemd-logind[746]: Session 6 logged out. Waiting for processes to exit.
Nov 25 09:10:05 compute-1 systemd-logind[746]: Removed session 6.
Nov 25 09:10:15 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:10:35 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
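The hostname task with use=systemd goes through systemd-hostnamed, which is why the Hostname Service starts above and every later record carries the compute-1 prefix. The manual equivalent:

    hostnamectl set-hostname compute-1
    hostnamectl --static    # should now print compute-1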
Nov 25 09:13:24 compute-1 sshd-session[29966]: Accepted publickey for zuul from 192.168.26.191 port 35848 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:13:24 compute-1 systemd-logind[746]: New session 7 of user zuul.
Nov 25 09:13:24 compute-1 systemd[1]: Started Session 7 of User zuul.
Nov 25 09:13:24 compute-1 sshd-session[29966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:13:24 compute-1 python3[30042]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:13:26 compute-1 sudo[30152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eesoaxjvvvrdmmpmvueeotjlcjckpxzq ; /usr/bin/python3'
Nov 25 09:13:26 compute-1 sudo[30152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:26 compute-1 python3[30154]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:26 compute-1 sudo[30152]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:26 compute-1 sudo[30225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xelrgukuxlappoofcgqegmqunvvyuugj ; /usr/bin/python3'
Nov 25 09:13:26 compute-1 sudo[30225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:26 compute-1 python3[30227]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9603918-34382-151148044847089/source mode=0755 _original_basename=delorean.repo follow=False checksum=cdee622b4b81aba8f448eb3a0d6bf38022474867 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:26 compute-1 sudo[30225]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:26 compute-1 sudo[30251]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbrmuybgjbhjmixtdojabakmvmzzudt ; /usr/bin/python3'
Nov 25 09:13:26 compute-1 sudo[30251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:26 compute-1 python3[30253]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:26 compute-1 sudo[30251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:26 compute-1 sudo[30324]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klhxvizapqpxznkkscxiitgdtymluyme ; /usr/bin/python3'
Nov 25 09:13:26 compute-1 sudo[30324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:26 compute-1 python3[30326]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9603918-34382-151148044847089/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:26 compute-1 sudo[30324]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:26 compute-1 sudo[30350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kustgflkkihjsobugzpttvmcgjzfphzj ; /usr/bin/python3'
Nov 25 09:13:26 compute-1 sudo[30350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-1 python3[30352]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:27 compute-1 sudo[30350]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:27 compute-1 sudo[30423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrlfxrdtiypaauitvsihxfrkuizlpore ; /usr/bin/python3'
Nov 25 09:13:27 compute-1 sudo[30423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-1 python3[30425]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9603918-34382-151148044847089/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:27 compute-1 sudo[30423]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:27 compute-1 sudo[30449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntquifyieobijvchymndwwpfjishotbw ; /usr/bin/python3'
Nov 25 09:13:27 compute-1 sudo[30449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-1 python3[30451]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:27 compute-1 sudo[30449]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:27 compute-1 sudo[30522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qomuerwznebkkdiypjyjbmprbjkekwzf ; /usr/bin/python3'
Nov 25 09:13:27 compute-1 sudo[30522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-1 python3[30524]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9603918-34382-151148044847089/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:27 compute-1 sudo[30522]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:27 compute-1 sudo[30548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutrlrecftefdaspcmqokfrxxpbvwktn ; /usr/bin/python3'
Nov 25 09:13:27 compute-1 sudo[30548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-1 python3[30550]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:27 compute-1 sudo[30548]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-1 sudo[30621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffbanwapgqxilsbsrwnkfyjeonmxnpkx ; /usr/bin/python3'
Nov 25 09:13:28 compute-1 sudo[30621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-1 python3[30623]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9603918-34382-151148044847089/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:28 compute-1 sudo[30621]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-1 sudo[30647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rabrwgpsftpowgvzrmivlczosrzgvqfa ; /usr/bin/python3'
Nov 25 09:13:28 compute-1 sudo[30647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-1 python3[30649]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:28 compute-1 sudo[30647]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-1 sudo[30720]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aejplihhodqlhottghrtknpiykkhopeh ; /usr/bin/python3'
Nov 25 09:13:28 compute-1 sudo[30720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-1 python3[30722]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9603918-34382-151148044847089/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:28 compute-1 sudo[30720]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-1 sudo[30746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yejoepqtawgqsyhjswucnzcgemfnlzwh ; /usr/bin/python3'
Nov 25 09:13:28 compute-1 sudo[30746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-1 python3[30748]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:28 compute-1 sudo[30746]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-1 sudo[30819]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvbepdmmicgclyvzprudwjgvtnmsqabl ; /usr/bin/python3'
Nov 25 09:13:28 compute-1 sudo[30819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-1 python3[30821]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9603918-34382-151148044847089/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:28 compute-1 sudo[30819]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:38 compute-1 python3[30869]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:15:33 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 09:15:33 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 09:15:33 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 09:15:33 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 09:16:01 compute-1 anacron[4572]: Job `cron.daily' started
Nov 25 09:16:01 compute-1 anacron[4572]: Job `cron.daily' terminated
Nov 25 09:18:37 compute-1 sshd-session[29969]: Received disconnect from 192.168.26.191 port 35848:11: disconnected by user
Nov 25 09:18:37 compute-1 sshd-session[29969]: Disconnected from user zuul 192.168.26.191 port 35848
Nov 25 09:18:37 compute-1 sshd-session[29966]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:18:37 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 09:18:37 compute-1 systemd[1]: session-7.scope: Consumed 3.251s CPU time.
Nov 25 09:18:37 compute-1 systemd-logind[746]: Session 7 logged out. Waiting for processes to exit.
Nov 25 09:18:37 compute-1 systemd-logind[746]: Removed session 7.
Nov 25 09:23:02 compute-1 sshd-session[30879]: Accepted publickey for zuul from 192.168.122.30 port 44766 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:23:02 compute-1 systemd-logind[746]: New session 8 of user zuul.
Nov 25 09:23:02 compute-1 systemd[1]: Started Session 8 of User zuul.
Nov 25 09:23:02 compute-1 sshd-session[30879]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:23:03 compute-1 python3.9[31032]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:04 compute-1 sudo[31211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noxqlmdjnjudlnddiiqtjazukjhfzatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062583.8621278-57-214882382914394/AnsiballZ_command.py'
Nov 25 09:23:04 compute-1 sudo[31211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:04 compute-1 python3.9[31213]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:23:12 compute-1 sudo[31211]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:12 compute-1 sshd-session[30882]: Connection closed by 192.168.122.30 port 44766
Nov 25 09:23:12 compute-1 sshd-session[30879]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:23:12 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 09:23:12 compute-1 systemd[1]: session-8.scope: Consumed 6.469s CPU time.
Nov 25 09:23:12 compute-1 systemd-logind[746]: Session 8 logged out. Waiting for processes to exit.
Nov 25 09:23:12 compute-1 systemd-logind[746]: Removed session 8.
Nov 25 09:23:28 compute-1 sshd-session[31270]: Accepted publickey for zuul from 192.168.122.30 port 48540 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:23:28 compute-1 systemd-logind[746]: New session 9 of user zuul.
Nov 25 09:23:28 compute-1 systemd[1]: Started Session 9 of User zuul.
Nov 25 09:23:28 compute-1 sshd-session[31270]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:23:28 compute-1 python3.9[31423]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 09:23:29 compute-1 python3.9[31597]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:30 compute-1 sudo[31747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtqrpkwrspgyoumgtrjmqrmruvkopotk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062610.062618-94-221054830383539/AnsiballZ_command.py'
Nov 25 09:23:30 compute-1 sudo[31747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:30 compute-1 python3.9[31749]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:23:30 compute-1 sudo[31747]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:31 compute-1 sudo[31900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daqevdcvhbtvegpnzncxlgqusoqeruzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062610.8986876-130-149143822290274/AnsiballZ_stat.py'
Nov 25 09:23:31 compute-1 sudo[31900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:31 compute-1 python3.9[31902]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:23:31 compute-1 sudo[31900]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:31 compute-1 sudo[32052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltmezqxfbpalryaoqvzmvwxmwxrmdkrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062611.5512576-154-148903482367365/AnsiballZ_file.py'
Nov 25 09:23:31 compute-1 sudo[32052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:31 compute-1 python3.9[32054]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:23:32 compute-1 sudo[32052]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:32 compute-1 sudo[32204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raxqdalwslvgrjbwzdohfncrpqwmlrcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062612.168617-178-59035781503456/AnsiballZ_stat.py'
Nov 25 09:23:32 compute-1 sudo[32204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:32 compute-1 python3.9[32206]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:23:32 compute-1 sudo[32204]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:32 compute-1 sudo[32327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkbiuyunxzznlvlqlyijkffwmgrhwmiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062612.168617-178-59035781503456/AnsiballZ_copy.py'
Nov 25 09:23:32 compute-1 sudo[32327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:33 compute-1 python3.9[32329]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062612.168617-178-59035781503456/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:23:33 compute-1 sudo[32327]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:33 compute-1 sudo[32479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdeuhwpelvkxlodywrffblwcztlgptxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062613.1720607-223-156326048853772/AnsiballZ_setup.py'
Nov 25 09:23:33 compute-1 sudo[32479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:33 compute-1 python3.9[32481]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:33 compute-1 sudo[32479]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:34 compute-1 sudo[32635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgafluvrpmrqhebjjcvyaianqaovcylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062613.9276767-247-42414115245918/AnsiballZ_file.py'
Nov 25 09:23:34 compute-1 sudo[32635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:34 compute-1 python3.9[32637]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:23:34 compute-1 sudo[32635]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:34 compute-1 sudo[32787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqqxwcktvsdhfzemeisvydcjltuvbnmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062614.460043-274-97967350920579/AnsiballZ_file.py'
Nov 25 09:23:34 compute-1 sudo[32787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:34 compute-1 python3.9[32789]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:23:34 compute-1 sudo[32787]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:35 compute-1 python3.9[32939]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:23:37 compute-1 python3.9[33192]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
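The lineinfile task above targets /proc/cmdline with create=False. Since /proc/cmdline is read-only, the task can only succeed when the line is already present, so it effectively asserts that the running kernel was booted with cloud-init disabled. A plain-shell version of the same check:

    grep -q 'cloud-init=disabled' /proc/cmdline && echo "cloud-init disabled on kernel cmdline"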
Nov 25 09:23:38 compute-1 python3.9[33342]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:39 compute-1 python3.9[33496]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:39 compute-1 sudo[33652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybmkolunyqzckrpbfsetjoxurensjkto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062619.6082187-418-88114764322571/AnsiballZ_setup.py'
Nov 25 09:23:39 compute-1 sudo[33652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:40 compute-1 python3.9[33654]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:23:40 compute-1 sudo[33652]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:40 compute-1 sudo[33736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmcntoyfjpbirwddryerdsxiftivdjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062619.6082187-418-88114764322571/AnsiballZ_dnf.py'
Nov 25 09:23:40 compute-1 sudo[33736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:40 compute-1 python3.9[33738]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:25:16 compute-1 systemd[1]: Reloading.
Nov 25 09:25:16 compute-1 systemd-rc-local-generator[33937]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:25:16 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 09:25:17 compute-1 systemd[1]: Reloading.
Nov 25 09:25:17 compute-1 systemd-rc-local-generator[33980]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:25:17 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 09:25:17 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 09:25:17 compute-1 systemd[1]: Reloading.
Nov 25 09:25:17 compute-1 systemd-rc-local-generator[34019]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:25:17 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 09:25:17 compute-1 dbus-broker-launch[726]: Noticed file-system modification, trigger reload.
Nov 25 09:25:17 compute-1 dbus-broker-launch[726]: Noticed file-system modification, trigger reload.
Nov 25 09:25:17 compute-1 dbus-broker-launch[726]: Noticed file-system modification, trigger reload.
Nov 25 09:26:00 compute-1 kernel: SELinux:  Converting 2717 SID table entries...
Nov 25 09:26:00 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:26:00 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:26:00 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:26:00 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:26:00 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:26:00 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:26:00 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:26:00 compute-1 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 09:26:01 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:26:01 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:26:01 compute-1 systemd[1]: Reloading.
Nov 25 09:26:01 compute-1 systemd-rc-local-generator[34325]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:26:01 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:26:01 compute-1 sudo[33736]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:01 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:26:01 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:26:01 compute-1 systemd[1]: run-r13ecb2c0542e4e0eb619a0ca83d9e123.service: Deactivated successfully.
Nov 25 09:26:01 compute-1 sudo[35239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptxmdmeracguzdewqddiyleojxdnffsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062761.6860719-455-33139711803447/AnsiballZ_command.py'
Nov 25 09:26:01 compute-1 sudo[35239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:02 compute-1 python3.9[35241]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:02 compute-1 sudo[35239]: pam_unix(sudo:session): session closed for user root
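The rpm -V run above verifies the freshly installed package set against the rpm database. Reproduced as a sketch:

    rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux \
        python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned \
        systemd-container crypto-policies-scripts grubby sos
    # silent output and exit status 0 mean every packaged file still matches the database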
Nov 25 09:26:03 compute-1 sudo[35520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pibvkpgafvaqgmbepcuujatsgesjeeoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062762.8216534-478-243313273175223/AnsiballZ_selinux.py'
Nov 25 09:26:03 compute-1 sudo[35520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:03 compute-1 python3.9[35522]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 09:26:03 compute-1 sudo[35520]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:04 compute-1 sudo[35672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlavwtuefzcisifmrbyifzbyskjfdrdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062763.9121046-511-205127341856286/AnsiballZ_command.py'
Nov 25 09:26:04 compute-1 sudo[35672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:04 compute-1 python3.9[35674]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 09:26:04 compute-1 sudo[35672]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:05 compute-1 sudo[35825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsdlkozzbzdwmugazfpdfxoblfijkpts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062765.0070918-535-203221967905064/AnsiballZ_file.py'
Nov 25 09:26:05 compute-1 sudo[35825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:05 compute-1 python3.9[35827]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:26:05 compute-1 sudo[35825]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:06 compute-1 sudo[35977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asauxbypjqtwgvewqmscoxazdmidclhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062766.137972-559-200860648253712/AnsiballZ_mount.py'
Nov 25 09:26:06 compute-1 sudo[35977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:06 compute-1 python3.9[35979]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 09:26:06 compute-1 sudo[35977]: pam_unix(sudo:session): session closed for user root
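The three tasks above build a 1 GiB swap file: dd creates /swap (skipped when it already exists, via creates=/swap), the file task locks it to root 0600, and posix.mount writes the fstab entry. A shell sketch of the same sequence; note that mkswap/swapon are not recorded in this excerpt, so the last line is an assumption about a later step:

    dd if=/dev/zero of=/swap bs=1M count=1024    # 1 GiB backing file
    chown root:root /swap && chmod 600 /swap
    grep -q '^/swap' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
    # mkswap /swap && swapon /swap              # assumption: presumably done elsewhere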
Nov 25 09:26:07 compute-1 sudo[36129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxninrgtqdduojuikgykcnsbzwqjbfyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062767.5553114-643-188783679165780/AnsiballZ_file.py'
Nov 25 09:26:07 compute-1 sudo[36129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:07 compute-1 python3.9[36131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:26:07 compute-1 sudo[36129]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:08 compute-1 sudo[36281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sztikhigpoqhmdhhcvypyogvzyeiongy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062768.0686352-667-3492378362800/AnsiballZ_stat.py'
Nov 25 09:26:08 compute-1 sudo[36281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:08 compute-1 python3.9[36283]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:26:08 compute-1 sudo[36281]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:08 compute-1 sudo[36404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqumlmpjpdnjsqxqpkmbyxjzlvngbfps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062768.0686352-667-3492378362800/AnsiballZ_copy.py'
Nov 25 09:26:08 compute-1 sudo[36404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:08 compute-1 python3.9[36406]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062768.0686352-667-3492378362800/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:26:08 compute-1 sudo[36404]: pam_unix(sudo:session): session closed for user root
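The directory and copy tasks above drop a CA bundle into the system trust anchors with the cert_t label. A minimal equivalent; the extract step is an assumption, since it is not shown in this excerpt but is required before the bundle takes effect:

    install -o root -g root -m 644 tls-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust extract    # rebuild the consolidated trust stores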
Nov 25 09:26:12 compute-1 sudo[36556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vktsnezgabtdqhhkvtfimxjebodozpri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062772.1288953-739-250112637537235/AnsiballZ_stat.py'
Nov 25 09:26:12 compute-1 sudo[36556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:12 compute-1 python3.9[36558]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:26:12 compute-1 sudo[36556]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:13 compute-1 sudo[36708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhrzeibwnbgqfedhqftsgjtarphynpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062773.0004294-763-227969015218293/AnsiballZ_command.py'
Nov 25 09:26:13 compute-1 sudo[36708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:13 compute-1 python3.9[36710]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:13 compute-1 sudo[36708]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:13 compute-1 sudo[36861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrgjfbytfajniigtbiqpbambeqlrueze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062773.5348415-787-21256653910223/AnsiballZ_file.py'
Nov 25 09:26:13 compute-1 sudo[36861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:13 compute-1 python3.9[36863]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:26:13 compute-1 sudo[36861]: pam_unix(sudo:session): session closed for user root
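The three tasks above initialize the LVM devices file: stat checks for it, vgimportdevices seeds it from visible PVs, and the touch guarantees it exists (with mode 0600) even when no PVs were found, which restricts LVM to an empty device list. As plain commands:

    vgimportdevices --all                        # seed system.devices from the PVs currently visible
    touch /etc/lvm/devices/system.devices        # ensure the file exists even if no PVs were found
    chown root:root /etc/lvm/devices/system.devices && chmod 600 /etc/lvm/devices/system.devices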
Nov 25 09:26:14 compute-1 sudo[37013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rghwfgqudssrjfolvxkhpnzjehabjdek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062774.3064275-820-239993653973096/AnsiballZ_getent.py'
Nov 25 09:26:14 compute-1 sudo[37013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:14 compute-1 python3.9[37015]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 09:26:14 compute-1 sudo[37013]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:15 compute-1 sudo[37166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnqyrocqovavmdwzkeaysqnbzmtnrsga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062774.8890598-844-111564279539628/AnsiballZ_group.py'
Nov 25 09:26:15 compute-1 sudo[37166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:15 compute-1 python3.9[37168]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:26:15 compute-1 groupadd[37169]: group added to /etc/group: name=qemu, GID=107
Nov 25 09:26:15 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:26:15 compute-1 groupadd[37169]: group added to /etc/gshadow: name=qemu
Nov 25 09:26:15 compute-1 groupadd[37169]: new group: name=qemu, GID=107
Nov 25 09:26:15 compute-1 sudo[37166]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:15 compute-1 sudo[37325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzvidatayhpdbtlqtouypfyhghdjlawp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062775.598981-868-91893224875349/AnsiballZ_user.py'
Nov 25 09:26:15 compute-1 sudo[37325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:16 compute-1 python3.9[37327]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 09:26:16 compute-1 useradd[37329]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 09:26:16 compute-1 sudo[37325]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:16 compute-1 sudo[37485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzkarnaabytanibwmgnrprrirpxtfqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062776.3487773-892-20772972259852/AnsiballZ_getent.py'
Nov 25 09:26:16 compute-1 sudo[37485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:16 compute-1 python3.9[37487]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 09:26:16 compute-1 sudo[37485]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:17 compute-1 sudo[37638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndltsmgvkpjqqjdtnsbpgbpjcgzfbcmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062776.8408327-916-279119926167932/AnsiballZ_group.py'
Nov 25 09:26:17 compute-1 sudo[37638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:17 compute-1 python3.9[37640]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:26:17 compute-1 groupadd[37641]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 25 09:26:17 compute-1 groupadd[37641]: group added to /etc/gshadow: name=hugetlbfs
Nov 25 09:26:17 compute-1 groupadd[37641]: new group: name=hugetlbfs, GID=42477
Nov 25 09:26:17 compute-1 sudo[37638]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:17 compute-1 sudo[37796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietqbpxzlavourbnbvrpqotdckthyuhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062777.429641-943-181614994311150/AnsiballZ_file.py'
Nov 25 09:26:17 compute-1 sudo[37796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:17 compute-1 python3.9[37798]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 09:26:17 compute-1 sudo[37796]: pam_unix(sudo:session): session closed for user root
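The file-module call above creates the vhost-user socket directory with an explicit SELinux label. A minimal equivalent task, with the name assumed and every parameter taken from the logged arguments:

    - name: Create the vhost-user socket directory   # hypothetical task name
      ansible.builtin.file:
        path: /var/lib/vhost_sockets
        state: directory
        owner: qemu
        group: qemu
        mode: '0755'
        seuser: system_u
        setype: virt_cache_t
      become: true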
Nov 25 09:26:18 compute-1 sudo[37948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiburpfguquwbzhdiaqajuqpimhzqyas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062778.1861537-976-27157742016326/AnsiballZ_dnf.py'
Nov 25 09:26:18 compute-1 sudo[37948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:18 compute-1 python3.9[37950]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:26:19 compute-1 sudo[37948]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:20 compute-1 sudo[38101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhvwlouzbaxrnewudxbdezxkdzxfdhyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062779.9782605-1000-280819283980917/AnsiballZ_file.py'
Nov 25 09:26:20 compute-1 sudo[38101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:20 compute-1 python3.9[38103]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:26:20 compute-1 sudo[38101]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:20 compute-1 sudo[38253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sabezknevuffznrgaozjsswknbmdqddd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062780.4806733-1024-88615962348906/AnsiballZ_stat.py'
Nov 25 09:26:20 compute-1 sudo[38253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:20 compute-1 python3.9[38255]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:26:20 compute-1 sudo[38253]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:21 compute-1 sudo[38376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnzlnldwkzfyzefvtsaqbslqujhohhbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062780.4806733-1024-88615962348906/AnsiballZ_copy.py'
Nov 25 09:26:21 compute-1 sudo[38376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:21 compute-1 python3.9[38378]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062780.4806733-1024-88615962348906/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:26:21 compute-1 sudo[38376]: pam_unix(sudo:session): session closed for user root
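The stat-then-copy pair above (pids 38255 and 38378) is the remote half of a single template-style file deployment: Ansible stats the destination first and only ships the rendered source when the checksums differ. The _original_basename of edpm-modprobe.conf.j2 indicates a template task on the controller, sketched here with the source path assumed:

    - name: Install the kernel-module load list      # hypothetical task name
      ansible.builtin.template:
        src: edpm-modprobe.conf.j2                   # basename from the log; repo location unknown
        dest: /etc/modules-load.d/99-edpm.conf
        owner: root
        group: root
        mode: '0644'
        setype: etc_t
      become: true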
Nov 25 09:26:21 compute-1 sudo[38528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbaatenbsyqsyetxmvygkqngbohreqji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062781.3681505-1069-2654129777803/AnsiballZ_systemd.py'
Nov 25 09:26:21 compute-1 sudo[38528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:22 compute-1 python3.9[38530]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:26:22 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 25 09:26:22 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 09:26:22 compute-1 kernel: Bridge firewalling registered
Nov 25 09:26:22 compute-1 systemd-modules-load[38534]: Inserted module 'br_netfilter'
Nov 25 09:26:22 compute-1 systemd[1]: Finished Load Kernel Modules.
Nov 25 09:26:22 compute-1 sudo[38528]: pam_unix(sudo:session): session closed for user root
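Restarting systemd-modules-load.service is what makes the freshly written 99-edpm.conf take effect; the kernel and systemd-modules-load lines above confirm br_netfilter was inserted as a result. A sketch of the restart task, name assumed:

    - name: Load kernel modules from modules-load.d  # hypothetical task name
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted
      become: true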
Nov 25 09:26:22 compute-1 sudo[38687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iujbftjojzgpqwlikkmonljwspqslepc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062782.2520225-1093-129528549870402/AnsiballZ_stat.py'
Nov 25 09:26:22 compute-1 sudo[38687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:22 compute-1 python3.9[38689]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:26:22 compute-1 sudo[38687]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:22 compute-1 sudo[38810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwejysfjdbomoglcdgklnemdkyesbkua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062782.2520225-1093-129528549870402/AnsiballZ_copy.py'
Nov 25 09:26:22 compute-1 sudo[38810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:22 compute-1 python3.9[38812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062782.2520225-1093-129528549870402/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:26:22 compute-1 sudo[38810]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:23 compute-1 sudo[38962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijovyrmzqteqgaxwhyjequmikoywqrcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062783.385154-1147-22313360279418/AnsiballZ_dnf.py'
Nov 25 09:26:23 compute-1 sudo[38962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:23 compute-1 python3.9[38964]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:26:27 compute-1 dbus-broker-launch[726]: Noticed file-system modification, trigger reload.
Nov 25 09:26:27 compute-1 dbus-broker-launch[726]: Noticed file-system modification, trigger reload.
Nov 25 09:26:27 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:26:27 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:26:27 compute-1 systemd[1]: Reloading.
Nov 25 09:26:28 compute-1 systemd-rc-local-generator[39021]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:26:28 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:26:28 compute-1 sudo[38962]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:30 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:26:30 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:26:30 compute-1 systemd[1]: man-db-cache-update.service: Consumed 3.013s CPU time.
Nov 25 09:26:30 compute-1 systemd[1]: run-r6378281635834cbdbfe0ff1aecff89b3.service: Deactivated successfully.
Nov 25 09:26:31 compute-1 python3.9[42679]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:26:31 compute-1 python3.9[42831]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 09:26:32 compute-1 python3.9[42981]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:26:32 compute-1 sudo[43131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlsrkxejvvwwnniessbjkrblowvbnrig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062792.7900755-1264-153061003007203/AnsiballZ_command.py'
Nov 25 09:26:32 compute-1 sudo[43131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:33 compute-1 python3.9[43133]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:33 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 09:26:33 compute-1 systemd[1]: Starting Authorization Manager...
Nov 25 09:26:33 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 09:26:33 compute-1 polkitd[43350]: Started polkitd version 0.117
Nov 25 09:26:33 compute-1 polkitd[43350]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 09:26:33 compute-1 polkitd[43350]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 09:26:33 compute-1 polkitd[43350]: Finished loading, compiling and executing 2 rules
Nov 25 09:26:33 compute-1 systemd[1]: Started Authorization Manager.
Nov 25 09:26:33 compute-1 polkitd[43350]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 25 09:26:33 compute-1 sudo[43131]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:34 compute-1 sudo[43514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihbincyqrofskstxtkxzddsohchxbhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062793.8175204-1291-273437537328022/AnsiballZ_systemd.py'
Nov 25 09:26:34 compute-1 sudo[43514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:34 compute-1 python3.9[43516]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:26:34 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 09:26:34 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 09:26:34 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 09:26:34 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 09:26:34 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 09:26:34 compute-1 sudo[43514]: pam_unix(sudo:session): session closed for user root
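The entries from 09:26:23 through 09:26:34 form a complete tuned rollout: install tuned and tuned-profiles-cpu-partitioning, activate the throughput-performance profile with tuned-adm, then enable and restart the service. A sketch of the last two steps, task names assumed:

    - name: Activate the tuned profile               # hypothetical task names
      ansible.builtin.command: /usr/sbin/tuned-adm profile throughput-performance
      become: true

    - name: Enable and restart tuned
      ansible.builtin.systemd:
        name: tuned
        enabled: true
        state: restarted
      become: true

Note the daemon is started twice: tuned-adm starts it on demand at 09:26:33, and the systemd task then stops and starts it again because state=restarted is unconditional.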
Nov 25 09:26:35 compute-1 python3.9[43677]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 09:26:38 compute-1 sudo[43827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akklbfkhlrfknocsezbvgjgibqsnxdov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062797.9193175-1462-150947096510234/AnsiballZ_systemd.py'
Nov 25 09:26:38 compute-1 sudo[43827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:38 compute-1 python3.9[43829]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:26:38 compute-1 systemd[1]: Reloading.
Nov 25 09:26:38 compute-1 systemd-rc-local-generator[43852]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:26:38 compute-1 sudo[43827]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:38 compute-1 sudo[44016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlcebnqwfkewjrviwwqfitmlytxjdkit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062798.6339014-1462-14005999216333/AnsiballZ_systemd.py'
Nov 25 09:26:38 compute-1 sudo[44016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:39 compute-1 python3.9[44018]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:26:39 compute-1 systemd[1]: Reloading.
Nov 25 09:26:39 compute-1 systemd-rc-local-generator[44040]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:26:39 compute-1 sudo[44016]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:39 compute-1 sudo[44205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-berrpurmdmykaerczuoerltdsptrqefs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062799.6121447-1510-147175780467692/AnsiballZ_command.py'
Nov 25 09:26:39 compute-1 sudo[44205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:39 compute-1 python3.9[44207]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:39 compute-1 sudo[44205]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:40 compute-1 sudo[44358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrjbzkgdocuzuwyrxxxuwphsjagfzhwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062800.120874-1534-281381538030757/AnsiballZ_command.py'
Nov 25 09:26:40 compute-1 sudo[44358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:40 compute-1 python3.9[44360]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:40 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 25 09:26:40 compute-1 sudo[44358]: pam_unix(sudo:session): session closed for user root
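The mkswap/swapon pair above activates a pre-created /swap file; the kernel line confirms roughly 1 GiB (1048572k) of swap came online. Sketched as command tasks with names assumed; note the logged calls carry no creates=/removes= guards (both are None), so as recorded they rerun on every play:

    - name: Format the swap file                     # hypothetical task names
      ansible.builtin.command: mkswap "/swap"
      become: true

    - name: Enable the swap file
      ansible.builtin.command: swapon "/swap"
      become: true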
Nov 25 09:26:40 compute-1 sudo[44511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gazescxjdlmtlvasxwidfkwghhllbreu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062800.6227572-1558-134212357763877/AnsiballZ_command.py'
Nov 25 09:26:40 compute-1 sudo[44511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:40 compute-1 python3.9[44513]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:42 compute-1 sudo[44511]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:42 compute-1 sudo[44673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxynbfpzdnlicbnaemivchzulxbbuofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062802.169011-1582-135469006970448/AnsiballZ_command.py'
Nov 25 09:26:42 compute-1 sudo[44673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:42 compute-1 python3.9[44675]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:42 compute-1 sudo[44673]: pam_unix(sudo:session): session closed for user root
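The KSM write above deserves a caveat: the log records _uses_shell=False, and without a shell the command module passes ">" to echo as a literal argument instead of performing a redirection, so the invocation as logged would not actually write to /sys/kernel/mm/ksm/run. A shell task is the usual way to express the intended write (echo 2 stops KSM and unmerges all shared pages):

    - name: Stop KSM and unmerge shared pages        # hypothetical task name
      ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run
      become: true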
Nov 25 09:26:42 compute-1 sudo[44826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgkhknjljvkbakvpfxlgmuxaraxhylfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062802.669573-1606-89412981785823/AnsiballZ_systemd.py'
Nov 25 09:26:42 compute-1 sudo[44826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:43 compute-1 python3.9[44828]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:26:43 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 09:26:43 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 09:26:43 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 09:26:43 compute-1 systemd[1]: Starting Apply Kernel Variables...
Nov 25 09:26:43 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 09:26:43 compute-1 systemd[1]: Finished Apply Kernel Variables.
Nov 25 09:26:43 compute-1 sudo[44826]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:43 compute-1 sshd-session[31273]: Connection closed by 192.168.122.30 port 48540
Nov 25 09:26:43 compute-1 sshd-session[31270]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:26:43 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 09:26:43 compute-1 systemd[1]: session-9.scope: Consumed 1min 37.431s CPU time.
Nov 25 09:26:43 compute-1 systemd-logind[746]: Session 9 logged out. Waiting for processes to exit.
Nov 25 09:26:43 compute-1 systemd-logind[746]: Removed session 9.
Nov 25 09:26:49 compute-1 sshd-session[44858]: Accepted publickey for zuul from 192.168.122.30 port 41686 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:26:49 compute-1 systemd-logind[746]: New session 10 of user zuul.
Nov 25 09:26:49 compute-1 systemd[1]: Started Session 10 of User zuul.
Nov 25 09:26:49 compute-1 sshd-session[44858]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:26:50 compute-1 python3.9[45011]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:26:51 compute-1 sudo[45165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ionzqqjdboehxsymstzsnyedohjeiqce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062810.7121263-69-126224183206805/AnsiballZ_getent.py'
Nov 25 09:26:51 compute-1 sudo[45165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:51 compute-1 python3.9[45167]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 09:26:51 compute-1 sudo[45165]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:51 compute-1 sudo[45318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-araybwvbsjokewagbajjxdgjwgdpzbjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062811.3817277-93-16310217356454/AnsiballZ_group.py'
Nov 25 09:26:51 compute-1 sudo[45318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:51 compute-1 python3.9[45320]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:26:51 compute-1 groupadd[45321]: group added to /etc/group: name=openvswitch, GID=42476
Nov 25 09:26:51 compute-1 groupadd[45321]: group added to /etc/gshadow: name=openvswitch
Nov 25 09:26:51 compute-1 groupadd[45321]: new group: name=openvswitch, GID=42476
Nov 25 09:26:51 compute-1 sudo[45318]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:52 compute-1 sudo[45476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhycnzdvykifqxwskignmgabszhsosoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062812.0386999-117-246549613426394/AnsiballZ_user.py'
Nov 25 09:26:52 compute-1 sudo[45476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:52 compute-1 python3.9[45478]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 09:26:52 compute-1 useradd[45480]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 09:26:52 compute-1 useradd[45480]: add 'openvswitch' to group 'hugetlbfs'
Nov 25 09:26:52 compute-1 useradd[45480]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 25 09:26:52 compute-1 sudo[45476]: pam_unix(sudo:session): session closed for user root
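The getent/group/user sequence starting at 09:26:51 mirrors the earlier qemu block: check for an existing account, create the openvswitch group, then create the matching user with hugetlbfs as a supplementary group, presumably so OVS can access hugepage-backed memory. Reconstructed from the logged arguments, task names assumed:

    - name: Ensure the openvswitch group exists      # hypothetical task names
      ansible.builtin.group:
        name: openvswitch
        gid: 42476
        state: present
      become: true

    - name: Ensure the openvswitch user exists
      ansible.builtin.user:
        name: openvswitch
        comment: openvswitch user
        uid: 42476
        group: openvswitch
        groups: ['hugetlbfs']
        shell: /sbin/nologin
        state: present
      become: true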
Nov 25 09:26:53 compute-1 sudo[45636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htfsgpubxojwkyyigpagvumzowjqlsvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062812.9496434-147-279109475438073/AnsiballZ_setup.py'
Nov 25 09:26:53 compute-1 sudo[45636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:53 compute-1 python3.9[45638]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:26:53 compute-1 sudo[45636]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:53 compute-1 sudo[45720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugbdjpfngiwrotgdkpeaomxpsuvvjqbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062812.9496434-147-279109475438073/AnsiballZ_dnf.py'
Nov 25 09:26:53 compute-1 sudo[45720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:54 compute-1 python3.9[45722]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 09:26:57 compute-1 sudo[45720]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:32 compute-1 sudo[45884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhmizkcjhmggtudnesypgzefbfzovti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062852.5409615-189-237582041775558/AnsiballZ_dnf.py'
Nov 25 09:27:32 compute-1 sudo[45884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:32 compute-1 python3.9[45886]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:27:40 compute-1 kernel: SELinux:  Converting 2729 SID table entries...
Nov 25 09:27:40 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:27:40 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:27:40 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:27:40 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:27:40 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:27:40 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:27:40 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:27:40 compute-1 groupadd[45909]: group added to /etc/group: name=unbound, GID=993
Nov 25 09:27:40 compute-1 groupadd[45909]: group added to /etc/gshadow: name=unbound
Nov 25 09:27:41 compute-1 groupadd[45909]: new group: name=unbound, GID=993
Nov 25 09:27:41 compute-1 useradd[45916]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 25 09:27:41 compute-1 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 09:27:41 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 09:27:41 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:27:41 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:27:41 compute-1 systemd[1]: Reloading.
Nov 25 09:27:41 compute-1 systemd-sysv-generator[46419]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:27:41 compute-1 systemd-rc-local-generator[46407]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:27:42 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:27:42 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:27:42 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:27:42 compute-1 systemd[1]: run-rb829a6fe762a4178bb7e6c0ca4f6b3ae.service: Deactivated successfully.
Nov 25 09:27:42 compute-1 sudo[45884]: pam_unix(sudo:session): session closed for user root
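openvswitch is handled in two dnf phases: a download_only pass at 09:26:54 warms the package cache, and the state=present pass at 09:27:32 performs the actual install, during which the SELinux policy reload and the unbound user creation above occur as dependent packages land. Sketched with task names assumed:

    - name: Pre-download openvswitch                 # hypothetical task names
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true
      become: true

    - name: Install openvswitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present
      become: true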
Nov 25 09:27:43 compute-1 sudo[46982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onpivethlrxvuieesdjkdwucvkgxuogc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062862.7514377-213-140229794880605/AnsiballZ_systemd.py'
Nov 25 09:27:43 compute-1 sudo[46982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:43 compute-1 python3.9[46984]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:27:43 compute-1 systemd[1]: Reloading.
Nov 25 09:27:43 compute-1 systemd-sysv-generator[47011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:27:43 compute-1 systemd-rc-local-generator[47008]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:27:43 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 09:27:43 compute-1 chown[47026]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 09:27:43 compute-1 ovs-ctl[47031]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 09:27:43 compute-1 ovs-ctl[47031]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 09:27:43 compute-1 ovs-ctl[47031]: Starting ovsdb-server [  OK  ]
Nov 25 09:27:43 compute-1 ovs-vsctl[47080]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 09:27:43 compute-1 ovs-vsctl[47100]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ad0cdb86-b3c6-44c6-a890-1db2efa57d2b\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 09:27:43 compute-1 ovs-ctl[47031]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 09:27:43 compute-1 ovs-ctl[47031]: Enabling remote OVSDB managers [  OK  ]
Nov 25 09:27:43 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 09:27:43 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 09:27:43 compute-1 ovs-vsctl[47106]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 25 09:27:43 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 09:27:43 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 09:27:43 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 09:27:43 compute-1 ovs-ctl[47151]: Inserting openvswitch module [  OK  ]
Nov 25 09:27:43 compute-1 ovs-ctl[47120]: Starting ovs-vswitchd [  OK  ]
Nov 25 09:27:43 compute-1 ovs-ctl[47120]: Enabling remote OVSDB managers [  OK  ]
Nov 25 09:27:43 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 09:27:43 compute-1 ovs-vsctl[47169]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 25 09:27:43 compute-1 systemd[1]: Starting Open vSwitch...
Nov 25 09:27:43 compute-1 systemd[1]: Finished Open vSwitch.
Nov 25 09:27:44 compute-1 sudo[46982]: pam_unix(sudo:session): session closed for user root
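Enabling and starting openvswitch.service produces the first-start sequence above: ovsdb-server creates an empty /etc/openvswitch/conf.db, system IDs are configured, the openvswitch kernel module is inserted, and ovs-vswitchd comes up. The chown complaint about /run/openvswitch appears to be first-start noise, since the database unit then starts cleanly. Task sketch, name assumed:

    - name: Enable and start Open vSwitch            # hypothetical task name
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started
      become: true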
Nov 25 09:27:44 compute-1 python3.9[47320]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:27:45 compute-1 sudo[47470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzrgdwaejjnkoqpazonkqldmkugpqalx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062864.8637817-267-25920640181322/AnsiballZ_sefcontext.py'
Nov 25 09:27:45 compute-1 sudo[47470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:45 compute-1 python3.9[47472]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 09:27:46 compute-1 kernel: SELinux:  Converting 2743 SID table entries...
Nov 25 09:27:46 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:27:46 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:27:46 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:27:46 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:27:46 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:27:46 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:27:46 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:27:46 compute-1 sudo[47470]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:46 compute-1 python3.9[47627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:27:47 compute-1 sudo[47783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdfhmwosbfsgajpdbgwgjchniwulibu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062867.3026388-321-16575495132439/AnsiballZ_dnf.py'
Nov 25 09:27:47 compute-1 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 09:27:47 compute-1 sudo[47783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:47 compute-1 python3.9[47785]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:27:48 compute-1 sudo[47783]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:49 compute-1 sudo[47936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnmdwwfdkxhbahhbizcpnohscxcazrvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062868.802568-345-181678548278794/AnsiballZ_command.py'
Nov 25 09:27:49 compute-1 sudo[47936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:49 compute-1 python3.9[47938]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:27:49 compute-1 sudo[47936]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:50 compute-1 sudo[48223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxttvcqkntjucxukuebqxycbjxoldonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062869.8699677-369-28150539692665/AnsiballZ_file.py'
Nov 25 09:27:50 compute-1 sudo[48223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:50 compute-1 python3.9[48225]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 09:27:50 compute-1 sudo[48223]: pam_unix(sudo:session): session closed for user root
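The sefcontext and file tasks work as a pair: community.general.sefcontext registers a persistent file-context rule for /var/lib/edpm-config(/.*)? (the policy reload is visible in the kernel lines at 09:27:46), and ansible.builtin.file then creates the directory so it picks up the container_file_t label. Reconstructed from the logged arguments, task names assumed:

    - name: Register SELinux context for edpm-config # hypothetical task names
      community.general.sefcontext:
        target: '/var/lib/edpm-config(/.*)?'
        setype: container_file_t
        selevel: s0
        state: present
      become: true

    - name: Create the edpm-config directory
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        mode: '0750'
        setype: container_file_t
        selevel: s0
      become: true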
Nov 25 09:27:50 compute-1 python3.9[48375]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:27:51 compute-1 sudo[48527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-capvaofwkwiaycpdqczgaivgjneyxyhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062871.052284-417-214201045082300/AnsiballZ_dnf.py'
Nov 25 09:27:51 compute-1 sudo[48527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:51 compute-1 python3.9[48529]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:27:54 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:27:54 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:27:54 compute-1 systemd[1]: Reloading.
Nov 25 09:27:54 compute-1 systemd-rc-local-generator[48562]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:27:54 compute-1 systemd-sysv-generator[48568]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:27:54 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:27:54 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:27:54 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:27:54 compute-1 systemd[1]: run-r2b1c8286e250413ebd0f57d7fe486cb9.service: Deactivated successfully.
Nov 25 09:27:54 compute-1 sudo[48527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:54 compute-1 sudo[48844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlvpkjlahftafaldlpshazgccttxtdey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062874.7660053-441-145988084893902/AnsiballZ_systemd.py'
Nov 25 09:27:54 compute-1 sudo[48844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:55 compute-1 python3.9[48846]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:27:55 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 09:27:55 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 09:27:55 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 09:27:55 compute-1 systemd[1]: Stopping Network Manager...
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2478] caught SIGTERM, shutting down normally.
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2489] dhcp4 (eth0): canceled DHCP transaction
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2489] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2489] dhcp4 (eth0): state changed no lease
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2491] dhcp6 (eth0): canceled DHCP transaction
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2491] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2491] dhcp6 (eth0): state changed no lease
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2493] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:27:55 compute-1 NetworkManager[7265]: <info>  [1764062875.2521] exiting (success)
Nov 25 09:27:55 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:27:55 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:27:55 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 09:27:55 compute-1 systemd[1]: Stopped Network Manager.
Nov 25 09:27:55 compute-1 systemd[1]: Starting Network Manager...
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3054] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:2d6bcb16-37ad-4149-af57-9c34e1d5b606)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3059] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3106] manager[0x5595301e1010]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 09:27:55 compute-1 systemd[1]: Starting Hostname Service...
Nov 25 09:27:55 compute-1 systemd[1]: Started Hostname Service.
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3718] hostname: hostname: using hostnamed
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3719] hostname: static hostname changed from (none) to "compute-1"
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3722] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3725] manager[0x5595301e1010]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3726] manager[0x5595301e1010]: rfkill: WWAN hardware radio set enabled
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3741] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3748] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3748] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3749] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3750] manager: Networking is enabled by state file
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3752] settings: Loaded settings plugin: keyfile (internal)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3755] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3779] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3786] dhcp: init: Using DHCP client 'internal'
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3788] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3792] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3797] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3803] device (lo): Activation: starting connection 'lo' (5c7fd779-ef98-4a55-a168-3fc860cb7264)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3809] device (eth0): carrier: link connected
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3812] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3816] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3817] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3822] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3827] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3831] device (eth1): carrier: link connected
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3835] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3838] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (0e60f29c-c89d-5595-9135-fa0fd01bf23b) (indicated)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3839] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3843] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3848] device (eth1): Activation: starting connection 'ci-private-network' (0e60f29c-c89d-5595-9135-fa0fd01bf23b)
Nov 25 09:27:55 compute-1 systemd[1]: Started Network Manager.
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3855] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3863] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3866] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3867] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3869] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3872] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3874] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3877] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3879] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3885] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3896] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3899] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3905] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3908] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3913] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3918] dhcp4 (eth0): state changed new lease, address=192.168.26.77
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3923] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3945] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3948] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3952] device (lo): Activation: successful, device activated.
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3960] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3962] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3964] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 09:27:55 compute-1 NetworkManager[48856]: <info>  [1764062875.3966] device (eth1): Activation: successful, device activated.
Nov 25 09:27:55 compute-1 systemd[1]: Starting Network Manager Wait Online...
Nov 25 09:27:55 compute-1 sudo[48844]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:55 compute-1 sudo[49053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofkzrzaljphuemyplojyouwlozvoisjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062875.5684414-465-48421830617281/AnsiballZ_dnf.py'
Nov 25 09:27:55 compute-1 sudo[49053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:55 compute-1 python3.9[49055]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:27:56 compute-1 NetworkManager[48856]: <info>  [1764062876.4778] dhcp6 (eth0): state changed new lease, address=2001:db8::2b6
Nov 25 09:27:56 compute-1 NetworkManager[48856]: <info>  [1764062876.4791] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:27:56 compute-1 NetworkManager[48856]: <info>  [1764062876.4822] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:27:56 compute-1 NetworkManager[48856]: <info>  [1764062876.4824] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:27:56 compute-1 NetworkManager[48856]: <info>  [1764062876.4826] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:27:56 compute-1 NetworkManager[48856]: <info>  [1764062876.4831] device (eth0): Activation: successful, device activated.
Nov 25 09:27:56 compute-1 NetworkManager[48856]: <info>  [1764062876.4835] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 09:27:56 compute-1 NetworkManager[48856]: <info>  [1764062876.4851] manager: startup complete
Nov 25 09:27:56 compute-1 systemd[1]: Finished Network Manager Wait Online.
Nov 25 09:28:01 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:28:01 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:28:01 compute-1 systemd[1]: Reloading.
Nov 25 09:28:01 compute-1 systemd-rc-local-generator[49124]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:28:01 compute-1 systemd-sysv-generator[49128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:28:01 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:28:01 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:28:01 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:28:01 compute-1 systemd[1]: run-re50e575cae4a4bf0ade1dc4c31f13c4c.service: Deactivated successfully.
Nov 25 09:28:02 compute-1 sudo[49053]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:04 compute-1 sudo[49531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymhjqyhshenyvtmmuihdgrceoqawloju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062884.0863533-501-197932144291484/AnsiballZ_stat.py'
Nov 25 09:28:04 compute-1 sudo[49531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:04 compute-1 python3.9[49533]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:28:04 compute-1 sudo[49531]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:05 compute-1 sudo[49683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjeparsxzarffsmbqtzqutfzigvgsact ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062884.6143537-528-215233271764600/AnsiballZ_ini_file.py'
Nov 25 09:28:05 compute-1 sudo[49683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:05 compute-1 python3.9[49685]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:05 compute-1 sudo[49683]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:05 compute-1 sudo[49837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xipnxvyjqijronrdsjnxpdmetouyocyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062885.5231771-558-16269276215347/AnsiballZ_ini_file.py'
Nov 25 09:28:05 compute-1 sudo[49837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:05 compute-1 python3.9[49839]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:05 compute-1 sudo[49837]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:06 compute-1 sudo[49989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmsxcopdvcevrqvqjmjssefankaeaqax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062885.9558449-558-202146444620048/AnsiballZ_ini_file.py'
Nov 25 09:28:06 compute-1 sudo[49989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:06 compute-1 python3.9[49991]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:06 compute-1 sudo[49989]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:06 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:28:06 compute-1 sudo[50143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxrspbicpmbdxsgfofozfmvcwcvsunzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062886.5280874-603-237800107798390/AnsiballZ_ini_file.py'
Nov 25 09:28:06 compute-1 sudo[50143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:06 compute-1 python3.9[50145]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:06 compute-1 sudo[50143]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:07 compute-1 sudo[50295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrmrbejefdkbuiyqzgrxayxoeazxuloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062886.9676168-603-108532174703829/AnsiballZ_ini_file.py'
Nov 25 09:28:07 compute-1 sudo[50295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:07 compute-1 python3.9[50297]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:07 compute-1 sudo[50295]: pam_unix(sudo:session): session closed for user root
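
The four ini_file tasks above converge NetworkManager's [main] section: no-auto-default=* is pinned in /etc/NetworkManager/NetworkManager.conf, and any dns= or rc-manager= overrides are removed from both that file and cloud-init's drop-in. A minimal sketch of the same net effect with Python's stdlib configparser (paths, section, and values come from the log; the Ansible module's backups, SELinux handling, and exact key formatting are omitted):

    #!/usr/bin/python3.9
    # Illustrative only: reproduces the net effect of the four ini_file tasks
    # logged above, without backup=True or no_extra_spaces formatting.
    import configparser

    def set_option(path, section, option, value):
        cfg = configparser.ConfigParser()
        cfg.read(path)                      # missing file -> created on write
        if not cfg.has_section(section):
            cfg.add_section(section)
        cfg.set(section, option, value)
        with open(path, "w") as f:
            cfg.write(f)

    def drop_option(path, section, option):
        cfg = configparser.ConfigParser()
        if not cfg.read(path):              # nothing to remove from a missing file
            return
        if cfg.has_section(section):
            cfg.remove_option(section, option)
        with open(path, "w") as f:
            cfg.write(f)

    # state=present: stop NM from auto-creating default profiles for any NIC.
    set_option("/etc/NetworkManager/NetworkManager.conf", "main", "no-auto-default", "*")
    # state=absent: clear dns= and rc-manager= overrides in both config files.
    for path in ("/etc/NetworkManager/NetworkManager.conf",
                 "/etc/NetworkManager/conf.d/99-cloud-init.conf"):
        drop_option(path, "main", "dns")
        drop_option(path, "main", "rc-manager")
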
Nov 25 09:28:07 compute-1 sudo[50447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkbltavhrdedehatilnymkplktqbnnkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062887.4799666-648-250340777493038/AnsiballZ_stat.py'
Nov 25 09:28:07 compute-1 sudo[50447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:07 compute-1 python3.9[50449]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:07 compute-1 sudo[50447]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:08 compute-1 sudo[50570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldebvltofngpvxnuopnjoecpwtojcinp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062887.4799666-648-250340777493038/AnsiballZ_copy.py'
Nov 25 09:28:08 compute-1 sudo[50570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:08 compute-1 python3.9[50572]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062887.4799666-648-250340777493038/.source _original_basename=.k4gbpzc2 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:08 compute-1 sudo[50570]: pam_unix(sudo:session): session closed for user root
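
The stat/copy pair above is Ansible's idempotent file transfer: the remote file's SHA-1 is compared against the source checksum (f6278a40... in the copy task) and the copy only happens on mismatch. A sketch of that check, with the staged source path shortened to an illustrative placeholder:

    # Sketch of the stat-then-copy idempotency check performed by the two
    # tasks above; checksum and destination are from the log, src is a
    # placeholder for the long ansible-tmp staging path.
    import hashlib, os, shutil

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "f6278a40de79a9841f6ed1fc584538225566990c"
    dest = "/etc/dhcp/dhclient-enter-hooks"
    src = "/tmp/.source"                      # illustrative staging path

    if not os.path.exists(dest) or sha1_of(dest) != expected:
        shutil.copy(src, dest)                # content differs: transfer it
    os.chmod(dest, 0o755)                     # mode=0755 from the task
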
Nov 25 09:28:08 compute-1 sudo[50722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqofzxmsuqwjudpztdkgnaqmtoxlbhos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062888.4861507-693-72771476946423/AnsiballZ_file.py'
Nov 25 09:28:08 compute-1 sudo[50722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:08 compute-1 python3.9[50724]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:08 compute-1 sudo[50722]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:09 compute-1 sudo[50874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckvrqjauvkbxofwoxasonowejaznvapt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062888.9922767-717-9337918212048/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 25 09:28:09 compute-1 sudo[50874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:09 compute-1 python3.9[50876]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 09:28:09 compute-1 sudo[50874]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:10 compute-1 sudo[51026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcodthyeqflueggmduiqrravohfkyebu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062889.850975-744-249550594224886/AnsiballZ_file.py'
Nov 25 09:28:10 compute-1 sudo[51026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:10 compute-1 python3.9[51028]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:10 compute-1 sudo[51026]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:10 compute-1 sudo[51178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfjnnvuxdputhomxqmtrhnphamtfwtqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062890.4953763-774-192683824160627/AnsiballZ_stat.py'
Nov 25 09:28:10 compute-1 sudo[51178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:10 compute-1 sudo[51178]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:11 compute-1 sudo[51301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlepetbqzwicjwlgwchzoywgpwbbhygp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062890.4953763-774-192683824160627/AnsiballZ_copy.py'
Nov 25 09:28:11 compute-1 sudo[51301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:11 compute-1 sudo[51301]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:11 compute-1 sudo[51453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhggxywyqfepxflptcqikwilhovtwry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062891.3691738-819-200172246541963/AnsiballZ_slurp.py'
Nov 25 09:28:11 compute-1 sudo[51453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:11 compute-1 python3.9[51455]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 09:28:11 compute-1 sudo[51453]: pam_unix(sudo:session): session closed for user root
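
ansible.builtin.slurp returns the file base64-encoded so arbitrary bytes survive the SSH/JSON round trip; the controller decodes it to inspect the rendered os-net-config YAML. What the module does, in outline:

    # slurp in miniature: read the bytes, ship them base64-encoded in the
    # module's JSON result (keys match slurp's documented return values).
    import base64, json

    with open("/etc/os-net-config/config.yaml", "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")

    print(json.dumps({"content": payload, "encoding": "base64",
                      "source": "/etc/os-net-config/config.yaml"}))
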
Nov 25 09:28:12 compute-1 sudo[51628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrpdlxnlxohosoihauxkfwtrngnobbhb ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062892.056182-846-205319898757112/async_wrapper.py j56206101451 300 /home/zuul/.ansible/tmp/ansible-tmp-1764062892.056182-846-205319898757112/AnsiballZ_edpm_os_net_config.py _'
Nov 25 09:28:12 compute-1 sudo[51628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:12 compute-1 ansible-async_wrapper.py[51630]: Invoked with j56206101451 300 /home/zuul/.ansible/tmp/ansible-tmp-1764062892.056182-846-205319898757112/AnsiballZ_edpm_os_net_config.py _
Nov 25 09:28:12 compute-1 ansible-async_wrapper.py[51633]: Starting module and watcher
Nov 25 09:28:12 compute-1 ansible-async_wrapper.py[51633]: Start watching 51634 (300)
Nov 25 09:28:12 compute-1 ansible-async_wrapper.py[51634]: Start module (51634)
Nov 25 09:28:12 compute-1 ansible-async_wrapper.py[51630]: Return async_wrapper task started.
Nov 25 09:28:12 compute-1 sudo[51628]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:12 compute-1 python3.9[51635]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
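
The network apply runs under async_wrapper: the wrapper (PID 51630) spawns a watcher (51633) and the module child (51634), records a status file under the async dir keyed by job id, and would kill the child if it outran the 300 s budget; the controller polls that file later (see the async_status calls at 09:28:16-09:28:20). A stripped-down sketch of the pattern, not the real wrapper, using the job id and timeout from the log:

    # Minimal fork-and-watch sketch of Ansible's async pattern (assumptions:
    # jid and 300 s timeout from the log; the real wrapper double-forks and
    # streams the module's own JSON into the status file).
    import json, os, signal, subprocess, time

    async_dir = "/root/.ansible_async"
    jid, timeout = "j56206101451.51630", 300
    os.makedirs(async_dir, exist_ok=True)

    child = subprocess.Popen(["/usr/bin/python3.9", "AnsiballZ_edpm_os_net_config.py"])
    deadline = time.time() + timeout
    while child.poll() is None:
        if time.time() > deadline:
            child.send_signal(signal.SIGTERM)   # watcher kills an overdue module
            break
        time.sleep(1)

    with open(os.path.join(async_dir, jid), "w") as f:
        json.dump({"finished": 1, "rc": child.returncode}, f)  # polled later
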
Nov 25 09:28:13 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 09:28:13 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 09:28:13 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 09:28:13 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 09:28:13 compute-1 kernel: cfg80211: failed to load regulatory.db
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.1936] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.1956] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2364] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2366] audit: op="connection-add" uuid="77774a5d-c72f-4d4f-bd7e-8f750f9aeaa6" name="br-ex-br" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2379] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2380] audit: op="connection-add" uuid="721d5ec4-9d46-42e0-9b73-2c17e8a15905" name="br-ex-port" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2389] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2391] audit: op="connection-add" uuid="06300436-554c-4e9d-ac5c-99a436d52629" name="eth1-port" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2400] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2401] audit: op="connection-add" uuid="31a190a5-79f3-4786-b235-c66353ca8bfa" name="vlan20-port" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2410] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2411] audit: op="connection-add" uuid="524c55fb-ce55-4aa7-922b-e9b658532366" name="vlan21-port" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2421] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2422] audit: op="connection-add" uuid="500208f4-de85-4058-8802-c6a1b7700f12" name="vlan22-port" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2430] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2432] audit: op="connection-add" uuid="845dc2b8-8ceb-4b06-9e77-94ecdaf545b7" name="vlan23-port" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2448] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.method,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.routes" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2462] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2464] audit: op="connection-add" uuid="0f7f268e-d144-47aa-ae32-f9ec70610bab" name="br-ex-if" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2485] audit: op="connection-update" uuid="0e60f29c-c89d-5595-9135-fa0fd01bf23b" name="ci-private-network" args="ovs-external-ids.data,connection.slave-type,connection.port-type,connection.controller,connection.timestamp,connection.master,ipv4.method,ipv4.routes,ipv4.dns,ipv4.never-default,ipv4.addresses,ipv4.routing-rules,ovs-interface.type,ipv6.method,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.dns,ipv6.routes,ipv6.addresses" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2498] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2501] audit: op="connection-add" uuid="71914e0f-e508-496a-b81a-3ed5d6e7c8bc" name="vlan20-if" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2513] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2515] audit: op="connection-add" uuid="ff8d303d-85a5-41ed-a0b1-29fdab9396d6" name="vlan21-if" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2527] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2528] audit: op="connection-add" uuid="eb0e7c85-f8f8-41cf-8f98-1bed087a0016" name="vlan22-if" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2541] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2543] audit: op="connection-add" uuid="80190b62-53bd-4f4c-b559-b84a940edfee" name="vlan23-if" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2552] audit: op="connection-delete" uuid="9932a87b-87bf-3422-b00e-c134c1ad07fd" name="Wired connection 1" pid=51636 uid=0 result="success"
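
NetworkManager models an OVS bridge as a three-level stack: an ovs-bridge connection (br-ex-br), one ovs-port per attached device or VLAN (br-ex-port, eth1-port, vlan20-port, ...), and an ovs-interface that carries the IP configuration (br-ex-if, vlan20-if, ...). The connection-add run above builds exactly that stack before the stale "Wired connection 1" profile is deleted. A hedged nmcli equivalent, driven from Python to stay in one language; connection names are the ones in the log, and IP settings plus ovs-port.tag values for the VLANs are omitted:

    # Sketch of the connection-add sequence above via nmcli
    # (bridge -> port -> interface, per NetworkManager's OVS model).
    import subprocess

    def add(*args):
        subprocess.run(["nmcli", "connection", "add", *args], check=True)

    # 1. the bridge itself
    add("type", "ovs-bridge", "conn.interface", "br-ex", "con-name", "br-ex-br")
    # 2. ports hanging off the bridge (VLAN tag settings omitted here)
    add("type", "ovs-port", "conn.interface", "br-ex",
        "master", "br-ex-br", "con-name", "br-ex-port")
    add("type", "ovs-port", "conn.interface", "eth1",
        "master", "br-ex-br", "con-name", "eth1-port")
    # 3. the interface that carries IP configuration
    add("type", "ovs-interface", "slave-type", "ovs-port",
        "conn.interface", "br-ex", "master", "br-ex-port", "con-name", "br-ex-if")
    # each vlanNN gets its own port + interface pair on the same bridge
    for vlan in ("vlan20", "vlan21", "vlan22", "vlan23"):
        add("type", "ovs-port", "conn.interface", vlan,
            "master", "br-ex-br", "con-name", f"{vlan}-port")
        add("type", "ovs-interface", "slave-type", "ovs-port",
            "conn.interface", vlan, "master", f"{vlan}-port",
            "con-name", f"{vlan}-if")
    # eth1's existing 'ci-private-network' profile is re-parented onto
    # eth1-port (see the connection-update above) rather than recreated.
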
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2561] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2569] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2573] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (77774a5d-c72f-4d4f-bd7e-8f750f9aeaa6)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2574] audit: op="connection-activate" uuid="77774a5d-c72f-4d4f-bd7e-8f750f9aeaa6" name="br-ex-br" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2576] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2581] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2585] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (721d5ec4-9d46-42e0-9b73-2c17e8a15905)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2587] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2591] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2595] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (06300436-554c-4e9d-ac5c-99a436d52629)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2597] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2602] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2605] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (31a190a5-79f3-4786-b235-c66353ca8bfa)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2607] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2612] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2616] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (524c55fb-ce55-4aa7-922b-e9b658532366)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2617] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2622] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2626] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (500208f4-de85-4058-8802-c6a1b7700f12)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2627] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2633] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2636] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (845dc2b8-8ceb-4b06-9e77-94ecdaf545b7)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2637] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2640] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2642] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2647] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2651] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2655] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (0f7f268e-d144-47aa-ae32-f9ec70610bab)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2656] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2660] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2661] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2663] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2664] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2673] device (eth1): disconnecting for new activation request.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2705] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2709] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2711] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2712] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2716] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2720] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2724] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (71914e0f-e508-496a-b81a-3ed5d6e7c8bc)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2724] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2727] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2729] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2730] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2732] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2736] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2740] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (ff8d303d-85a5-41ed-a0b1-29fdab9396d6)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2740] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2743] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2744] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2745] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2747] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2750] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2754] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (eb0e7c85-f8f8-41cf-8f98-1bed087a0016)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2755] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2757] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2758] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2760] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2763] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2769] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2772] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (80190b62-53bd-4f4c-b559-b84a940edfee)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2773] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2775] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2776] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2777] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2778] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2788] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.method,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.routes" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2790] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2792] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2793] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2799] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2801] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2804] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 kernel: ovs-system: entered promiscuous mode
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2820] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2826] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2830] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 kernel: Timeout policy base is empty
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2837] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2840] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2842] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2846] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 systemd-udevd[51642]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2849] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2857] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2860] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2864] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2868] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2870] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2872] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2876] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2880] dhcp4 (eth0): canceled DHCP transaction
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2881] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2881] dhcp4 (eth0): state changed no lease
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2882] dhcp6 (eth0): canceled DHCP transaction
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2883] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2883] dhcp6 (eth0): state changed no lease
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2888] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2899] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2902] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51636 uid=0 result="fail" reason="Device is not activated"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2905] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2913] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 09:28:14 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2963] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2966] dhcp4 (eth0): state changed new lease, address=192.168.26.77
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2971] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2989] device (eth1): disconnecting for new activation request.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2989] audit: op="connection-activate" uuid="0e60f29c-c89d-5595-9135-fa0fd01bf23b" name="ci-private-network" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.2990] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3079] device (eth1): Activation: starting connection 'ci-private-network' (0e60f29c-c89d-5595-9135-fa0fd01bf23b)
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3082] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3093] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3095] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3099] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3102] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3105] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3106] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3107] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51636 uid=0 result="success"
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3107] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3108] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3109] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3110] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3112] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3116] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 kernel: br-ex: entered promiscuous mode
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3118] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3121] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3123] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3126] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3129] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3133] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3135] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3137] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3139] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3141] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3143] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3147] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3151] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 kernel: vlan22: entered promiscuous mode
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3223] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3229] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3237] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3238] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3241] device (eth1): Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3255] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3258] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 09:28:14 compute-1 systemd-udevd[51640]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3263] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3329] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3337] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 kernel: vlan21: entered promiscuous mode
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3372] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3374] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3381] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 kernel: vlan23: entered promiscuous mode
Nov 25 09:28:14 compute-1 systemd-udevd[51641]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3465] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3474] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3486] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3488] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3494] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 kernel: vlan20: entered promiscuous mode
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3512] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3528] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3546] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3548] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3556] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3661] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3670] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3683] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3686] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-1 NetworkManager[48856]: <info>  [1764062894.3694] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:15 compute-1 NetworkManager[48856]: <info>  [1764062895.4559] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51636 uid=0 result="success"
Nov 25 09:28:15 compute-1 NetworkManager[48856]: <info>  [1764062895.5582] checkpoint[0x5595301b9950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 09:28:15 compute-1 NetworkManager[48856]: <info>  [1764062895.5585] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51636 uid=0 result="success"
Nov 25 09:28:15 compute-1 NetworkManager[48856]: <info>  [1764062895.6662] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51636 uid=0 result="success"
Nov 25 09:28:15 compute-1 NetworkManager[48856]: <info>  [1764062895.6672] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51636 uid=0 result="success"
Nov 25 09:28:15 compute-1 NetworkManager[48856]: <info>  [1764062895.8244] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51636 uid=0 result="success"
Nov 25 09:28:15 compute-1 NetworkManager[48856]: <info>  [1764062895.9399] checkpoint[0x5595301b9a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 09:28:15 compute-1 NetworkManager[48856]: <info>  [1764062895.9404] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51636 uid=0 result="success"
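
Each batch of changes is wrapped in a NetworkManager checkpoint, as the audit ops show: create with a rollback timeout, keep extending the timeout while the apply is verified, then destroy the checkpoint to commit. If the applying process died mid-way, NM would roll the network back on its own. A sketch of that lifecycle over the org.freedesktop.NetworkManager D-Bus API, assuming dbus-python is available; the method names match the audit ops above and apply_network_changes is a stand-in for the nmstate work:

    import dbus

    def apply_network_changes():
        pass  # placeholder for the nmstate apply done by edpm_os_net_config

    bus = dbus.SystemBus()
    nm = dbus.Interface(
        bus.get_object("org.freedesktop.NetworkManager",
                       "/org/freedesktop/NetworkManager"),
        "org.freedesktop.NetworkManager")

    # checkpoint-create: snapshot all devices, auto-rollback after 60 s
    cp = nm.CheckpointCreate(dbus.Array([], signature="o"), 60, 0)
    try:
        apply_network_changes()
        nm.CheckpointAdjustRollbackTimeout(cp, 60)  # keep-alive while verifying
        nm.CheckpointDestroy(cp)                    # commit: drop the rollback
    except Exception:
        nm.CheckpointRollback(cp)                   # restore the snapshot
        raise
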
Nov 25 09:28:16 compute-1 NetworkManager[48856]: <info>  [1764062896.1741] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51636 uid=0 result="success"
Nov 25 09:28:16 compute-1 NetworkManager[48856]: <info>  [1764062896.1759] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51636 uid=0 result="success"
Nov 25 09:28:16 compute-1 sudo[51989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgkwnnerlxzjiqkblmeoirodmhhberuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062895.857751-846-248319428114983/AnsiballZ_async_status.py'
Nov 25 09:28:16 compute-1 sudo[51989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:16 compute-1 NetworkManager[48856]: <info>  [1764062896.3364] audit: op="networking-control" arg="global-dns-configuration" pid=51636 uid=0 result="success"
Nov 25 09:28:16 compute-1 NetworkManager[48856]: <info>  [1764062896.3377] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Nov 25 09:28:16 compute-1 NetworkManager[48856]: <info>  [1764062896.3383] audit: op="networking-control" arg="global-dns-configuration" pid=51636 uid=0 result="success"
Nov 25 09:28:16 compute-1 NetworkManager[48856]: <info>  [1764062896.3438] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51636 uid=0 result="success"
Nov 25 09:28:16 compute-1 NetworkManager[48856]: <info>  [1764062896.4629] checkpoint[0x5595301b9af0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Nov 25 09:28:16 compute-1 NetworkManager[48856]: <info>  [1764062896.4642] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51636 uid=0 result="success"
Nov 25 09:28:16 compute-1 python3.9[51991]: ansible-ansible.legacy.async_status Invoked with jid=j56206101451.51630 mode=status _async_dir=/root/.ansible_async
Nov 25 09:28:16 compute-1 sudo[51989]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:16 compute-1 ansible-async_wrapper.py[51634]: Module complete (51634)
Nov 25 09:28:17 compute-1 ansible-async_wrapper.py[51633]: Done in kid B.
Nov 25 09:28:19 compute-1 sudo[52093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmnbopemmzqebfpfxoajhxrngbvnaepa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062895.857751-846-248319428114983/AnsiballZ_async_status.py'
Nov 25 09:28:19 compute-1 sudo[52093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:19 compute-1 python3.9[52095]: ansible-ansible.legacy.async_status Invoked with jid=j56206101451.51630 mode=status _async_dir=/root/.ansible_async
Nov 25 09:28:19 compute-1 sudo[52093]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:20 compute-1 sudo[52193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwxxaitekeckobctvcnekaiwwrqrsdnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062895.857751-846-248319428114983/AnsiballZ_async_status.py'
Nov 25 09:28:20 compute-1 sudo[52193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:20 compute-1 python3.9[52195]: ansible-ansible.legacy.async_status Invoked with jid=j56206101451.51630 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 09:28:20 compute-1 sudo[52193]: pam_unix(sudo:session): session closed for user root
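
The controller polls the async job twice with mode=status until the status file reports finished, then removes it with mode=cleanup. The poll side amounts to reading the JSON the wrapper wrote under _async_dir; a sketch with the jid and directory from the log (the real async_status also tolerates a partially written file):

    import json, os, time

    status_file = "/root/.ansible_async/j56206101451.51630"
    while True:
        with open(status_file) as f:
            result = json.load(f)
        if result.get("finished"):
            break
        time.sleep(3)              # the log shows ~3 s between status polls
    os.remove(status_file)         # mode=cleanup
    print("os-net-config rc:", result.get("rc"))
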
Nov 25 09:28:20 compute-1 sudo[52345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrxiqzffhdtfnvnejpnucomnzcuidmme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062900.3800256-927-2394205468958/AnsiballZ_stat.py'
Nov 25 09:28:20 compute-1 sudo[52345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:20 compute-1 python3.9[52347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:20 compute-1 sudo[52345]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:20 compute-1 sudo[52468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idhivwjyvyxoblllxdyosrsdgptdyqyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062900.3800256-927-2394205468958/AnsiballZ_copy.py'
Nov 25 09:28:20 compute-1 sudo[52468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:21 compute-1 python3.9[52470]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062900.3800256-927-2394205468958/.source.returncode _original_basename=.rzxl58vg follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:21 compute-1 sudo[52468]: pam_unix(sudo:session): session closed for user root
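
The checksum recorded for os-net-config.returncode, b6589fc6..., is the SHA-1 of the single byte "0": the network apply exited 0, and the marker file lets later plays gate on a previously successful run (compare the stat of the same path at 09:28:04). Quick check:

    import hashlib
    # sha1 of the literal string "0" matches the checksum in the copy task
    # above, i.e. the file records a zero (success) return code.
    assert hashlib.sha1(b"0").hexdigest() == \
        "b6589fc6ab0dc82cf12099d1c2d40ab994e8410c"
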
Nov 25 09:28:21 compute-1 sudo[52620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdohgbnqnqzeszluzqybztkqhaicljqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062901.333632-975-248116067114362/AnsiballZ_stat.py'
Nov 25 09:28:21 compute-1 sudo[52620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:21 compute-1 python3.9[52622]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:21 compute-1 sudo[52620]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:21 compute-1 sudo[52743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svrsvrqtaeayfsahhimhmgqhpbalihqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062901.333632-975-248116067114362/AnsiballZ_copy.py'
Nov 25 09:28:21 compute-1 sudo[52743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:22 compute-1 python3.9[52745]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062901.333632-975-248116067114362/.source.cfg _original_basename=.dxq0fzd6 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:22 compute-1 sudo[52743]: pam_unix(sudo:session): session closed for user root
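
Only the checksum of 99-edpm-disable-network-config.cfg appears in the log, but a drop-in with this name conventionally carries cloud-init's documented switch for disabling its network rendering, so cloud-init stops rewriting interfaces that os-net-config now owns. A sketch writing that assumed content:

    # Assumption: the drop-in holds cloud-init's standard network-disable
    # knob; the log records only the file's checksum, not its body.
    with open("/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg", "w") as f:
        f.write("network: {config: disabled}\n")
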
Nov 25 09:28:22 compute-1 sudo[52895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxnsgsjrhwlgqscdyzsxlaxhbwixunge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062902.2192562-1020-182771989750211/AnsiballZ_systemd.py'
Nov 25 09:28:22 compute-1 sudo[52895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:22 compute-1 python3.9[52897]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:28:22 compute-1 systemd[1]: Reloading Network Manager...
Nov 25 09:28:22 compute-1 NetworkManager[48856]: <info>  [1764062902.7057] audit: op="reload" arg="0" pid=52901 uid=0 result="success"
Nov 25 09:28:22 compute-1 NetworkManager[48856]: <info>  [1764062902.7063] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 09:28:22 compute-1 NetworkManager[48856]: <info>  [1764062902.7064] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 09:28:22 compute-1 systemd[1]: Reloaded Network Manager.
Nov 25 09:28:22 compute-1 sudo[52895]: pam_unix(sudo:session): session closed for user root
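[editor's note] The systemd module call above maps to `systemctl reload NetworkManager`: the journal confirms a SIGHUP-driven re-read of NetworkManager.conf and its conf.d drop-ins without restarting the daemon or bouncing active connections. The equivalent task:

```yaml
- name: Reload NetworkManager to pick up new conf.d drop-ins
  ansible.builtin.systemd:
    name: NetworkManager
    state: reloaded
```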
Nov 25 09:28:23 compute-1 sshd-session[44861]: Connection closed by 192.168.122.30 port 41686
Nov 25 09:28:23 compute-1 sshd-session[44858]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:28:23 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 09:28:23 compute-1 systemd[1]: session-10.scope: Consumed 35.076s CPU time.
Nov 25 09:28:23 compute-1 systemd-logind[746]: Session 10 logged out. Waiting for processes to exit.
Nov 25 09:28:23 compute-1 systemd-logind[746]: Removed session 10.
Nov 25 09:28:25 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:28:28 compute-1 sshd-session[52934]: Accepted publickey for zuul from 192.168.122.30 port 33528 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:28:28 compute-1 systemd-logind[746]: New session 11 of user zuul.
Nov 25 09:28:28 compute-1 systemd[1]: Started Session 11 of User zuul.
Nov 25 09:28:28 compute-1 sshd-session[52934]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:28:29 compute-1 python3.9[53087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:28:29 compute-1 python3.9[53242]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:28:30 compute-1 python3.9[53435]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:28:31 compute-1 sshd-session[52937]: Connection closed by 192.168.122.30 port 33528
Nov 25 09:28:31 compute-1 sshd-session[52934]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:28:31 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 09:28:31 compute-1 systemd[1]: session-11.scope: Consumed 1.554s CPU time.
Nov 25 09:28:31 compute-1 systemd-logind[746]: Session 11 logged out. Waiting for processes to exit.
Nov 25 09:28:31 compute-1 systemd-logind[746]: Removed session 11.
Nov 25 09:28:32 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:28:36 compute-1 sshd-session[53464]: Accepted publickey for zuul from 192.168.122.30 port 38048 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:28:36 compute-1 systemd-logind[746]: New session 12 of user zuul.
Nov 25 09:28:36 compute-1 systemd[1]: Started Session 12 of User zuul.
Nov 25 09:28:36 compute-1 sshd-session[53464]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:28:37 compute-1 python3.9[53617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:28:37 compute-1 python3.9[53771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:28:38 compute-1 sudo[53925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsqdunddgsgoszxjxbzxyxrurtvkmljv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062918.1756535-81-82816969975028/AnsiballZ_setup.py'
Nov 25 09:28:38 compute-1 sudo[53925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:38 compute-1 python3.9[53927]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:28:38 compute-1 sudo[53925]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:39 compute-1 sudo[54010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzuhyyuzxxmmwduzigzhqpphnhmsfodq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062918.1756535-81-82816969975028/AnsiballZ_dnf.py'
Nov 25 09:28:39 compute-1 sudo[54010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:39 compute-1 python3.9[54012]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:28:40 compute-1 sudo[54010]: pam_unix(sudo:session): session closed for user root
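[editor's note] The dnf invocation at 09:28:39 carries mostly module defaults; only name and state matter. Stripped to its essentials, the task is:

```yaml
- name: Ensure podman is installed
  ansible.builtin.dnf:
    name: podman
    state: present
```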
Nov 25 09:28:40 compute-1 sudo[54163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzmzjcumjlmpeozfmibwnajnuzeoyjps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062920.4301908-117-171710541791173/AnsiballZ_setup.py'
Nov 25 09:28:40 compute-1 sudo[54163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:40 compute-1 python3.9[54165]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:28:41 compute-1 sudo[54163]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:41 compute-1 sudo[54358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxtdhxvkjeuqrglwnwxkwmddntvsulwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062921.4274578-150-54253479707646/AnsiballZ_file.py'
Nov 25 09:28:41 compute-1 sudo[54358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:41 compute-1 python3.9[54360]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:41 compute-1 sudo[54358]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:42 compute-1 sudo[54511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemfsfajenunxcffleejiwfumjtxniem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062922.0240445-174-263094402727201/AnsiballZ_command.py'
Nov 25 09:28:42 compute-1 sudo[54511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:42 compute-1 python3.9[54513]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:28:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3963683588-merged.mount: Deactivated successfully.
Nov 25 09:28:42 compute-1 podman[54514]: 2025-11-25 09:28:42.522980791 +0000 UTC m=+0.022791709 system refresh
Nov 25 09:28:42 compute-1 sudo[54511]: pam_unix(sudo:session): session closed for user root
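[editor's note] _uses_shell=True marks this as a shell task, and the `system refresh` event shows it was podman's first run after installation. `podman network inspect podman` fails when no default network definition exists yet, so its outcome presumably gates the podman.json copy that follows; a sketch (the register/failed_when handling is an assumption, only the command is logged):

```yaml
- name: Check whether the default podman network is already defined
  ansible.builtin.shell: podman network inspect podman
  register: podman_network_probe   # hypothetical variable name
  failed_when: false
  changed_when: false
```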
Nov 25 09:28:43 compute-1 sudo[54672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biwhtsbmdugpbnbujdelcwailybxlqgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062922.6794934-198-152763332290939/AnsiballZ_stat.py'
Nov 25 09:28:43 compute-1 sudo[54672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:43 compute-1 python3.9[54674]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:43 compute-1 sudo[54672]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:43 compute-1 sudo[54795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqwmcxhpgnijbegntdjaozcjbjgvucv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062922.6794934-198-152763332290939/AnsiballZ_copy.py'
Nov 25 09:28:43 compute-1 sudo[54795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:43 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:28:43 compute-1 python3.9[54797]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062922.6794934-198-152763332290939/.source.json follow=False _original_basename=podman_network_config.j2 checksum=2e7b0bad456f8c44de944e92d0f25f8b60fb7cdc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:43 compute-1 sudo[54795]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:44 compute-1 sudo[54947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvfgfvllhbfztrobrokrufiqxwawgepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062923.8476772-243-266878337253178/AnsiballZ_stat.py'
Nov 25 09:28:44 compute-1 sudo[54947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:44 compute-1 python3.9[54949]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:44 compute-1 sudo[54947]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:44 compute-1 sudo[55070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqurzqeycldvwzvsgqpwxliusczssupx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062923.8476772-243-266878337253178/AnsiballZ_copy.py'
Nov 25 09:28:44 compute-1 sudo[55070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:44 compute-1 python3.9[55072]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062923.8476772-243-266878337253178/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:44 compute-1 sudo[55070]: pam_unix(sudo:session): session closed for user root
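[editor's note] 20-edpm-podman-registries.conf is a containers-registries.conf(5) drop-in, which is TOML; the actual body is not logged. A sketch with a purely illustrative registry value:

```yaml
- name: Configure container registries for podman
  ansible.builtin.copy:
    dest: /etc/containers/registries.conf.d/20-edpm-podman-registries.conf
    owner: root
    group: root
    mode: "0644"
    setype: etc_t
    # Illustrative body only; the real one is not logged:
    content: |
      unqualified-search-registries = ["quay.io"]
```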
Nov 25 09:28:45 compute-1 sudo[55222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auyvzqwlnyislxjmuakyxduzhkfjnxwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062924.8245003-291-172397643585210/AnsiballZ_ini_file.py'
Nov 25 09:28:45 compute-1 sudo[55222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:45 compute-1 python3.9[55224]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:45 compute-1 sudo[55222]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:45 compute-1 sudo[55375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqhfvsyrpabljbuxzxntngfnwsfuivf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062925.3847058-291-66034075447283/AnsiballZ_ini_file.py'
Nov 25 09:28:45 compute-1 sudo[55375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:45 compute-1 python3.9[55377]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:45 compute-1 sudo[55375]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:45 compute-1 sudo[55527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-konkrtxsqizfupdugwbhkiaubrthllwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062925.8210294-291-250369732039523/AnsiballZ_ini_file.py'
Nov 25 09:28:45 compute-1 sudo[55527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:46 compute-1 python3.9[55529]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:46 compute-1 sudo[55527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:46 compute-1 sudo[55679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktysdpywgwzduiyxyhtakckarjfwtbgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062926.2576234-291-154436433476128/AnsiballZ_ini_file.py'
Nov 25 09:28:46 compute-1 sudo[55679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:46 compute-1 python3.9[55681]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:46 compute-1 sudo[55679]: pam_unix(sudo:session): session closed for user root
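[editor's note] The four community.general.ini_file calls between 09:28:45 and 09:28:46 converge /etc/containers/containers.conf one key at a time: a 4096-PID cap per container, journald as the events backend, crun as the OCI runtime, and netavark as the network backend. Folded into a single loop, with parameters exactly as logged:

```yaml
- name: Tune /etc/containers/containers.conf
  community.general.ini_file:
    path: /etc/containers/containers.conf
    create: true
    owner: root
    group: root
    mode: "0644"
    setype: etc_t
    section: "{{ item.section }}"
    option: "{{ item.option }}"
    value: "{{ item.value }}"
  loop:
    - { section: containers, option: pids_limit, value: "4096" }
    - { section: engine, option: events_logger, value: '"journald"' }
    - { section: engine, option: runtime, value: '"crun"' }
    - { section: network, option: network_backend, value: '"netavark"' }
```

The nested quotes are deliberate: containers.conf is TOML, so string values must land in the file with their double quotes, exactly as the log shows (value="journald").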
Nov 25 09:28:47 compute-1 sudo[55831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qitrpdsitecrdeeydipqnkixrcaaoudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062926.8750265-384-264920241561561/AnsiballZ_dnf.py'
Nov 25 09:28:47 compute-1 sudo[55831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:47 compute-1 python3.9[55833]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:28:48 compute-1 sudo[55831]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:48 compute-1 sudo[55984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eujqsuolqjnpkkejdiforttjfkvjicpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062928.7158995-417-94776386568178/AnsiballZ_setup.py'
Nov 25 09:28:48 compute-1 sudo[55984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:49 compute-1 python3.9[55986]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:28:49 compute-1 sudo[55984]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:49 compute-1 sudo[56138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arejxysdwejazhpeseuptcwilhpfeamn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062929.360295-441-75751356297948/AnsiballZ_stat.py'
Nov 25 09:28:49 compute-1 sudo[56138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:49 compute-1 python3.9[56140]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:28:49 compute-1 sudo[56138]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:50 compute-1 sudo[56290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvtuzprwqhcknhhrlpbitqbqtawpeswx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062929.912509-468-192143465417549/AnsiballZ_stat.py'
Nov 25 09:28:50 compute-1 sudo[56290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:50 compute-1 python3.9[56292]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:28:50 compute-1 sudo[56290]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:50 compute-1 sudo[56442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmsorkyvhqokcwnrsdcaoxlennkelhts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062930.5676308-498-183817289794389/AnsiballZ_command.py'
Nov 25 09:28:50 compute-1 sudo[56442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:50 compute-1 python3.9[56444]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:28:50 compute-1 sudo[56442]: pam_unix(sudo:session): session closed for user root
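[editor's note] `systemctl is-system-running` prints a single state word (running, degraded, starting, ...) and exits non-zero for anything but running, so a playbook normally tolerates the failure and inspects stdout. A sketch; only the command itself is logged, the error handling is assumed:

```yaml
- name: Record the systemd manager state
  ansible.builtin.command: systemctl is-system-running
  register: system_state   # hypothetical variable name
  failed_when: false
  changed_when: false
```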
Nov 25 09:28:51 compute-1 sudo[56595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uymjjbgomcpbwoolusanasiszmxysefj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062931.2144816-528-1391753924046/AnsiballZ_service_facts.py'
Nov 25 09:28:51 compute-1 sudo[56595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:51 compute-1 python3.9[56597]: ansible-service_facts Invoked
Nov 25 09:28:51 compute-1 network[56614]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:28:51 compute-1 network[56615]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:28:51 compute-1 network[56616]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:28:53 compute-1 sudo[56595]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:54 compute-1 sudo[56899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckvoxhqeacrzvxuhthwazmpciggaxpqy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764062934.522769-573-201545418305951/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764062934.522769-573-201545418305951/args'
Nov 25 09:28:54 compute-1 sudo[56899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:54 compute-1 sudo[56899]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:55 compute-1 sudo[57066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwbknygscyowtxskjwjnjvhvyalgkssl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062935.0525017-606-35638511183842/AnsiballZ_dnf.py'
Nov 25 09:28:55 compute-1 sudo[57066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:55 compute-1 python3.9[57068]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:28:56 compute-1 sudo[57066]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:57 compute-1 sudo[57219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lelonmgureoodriiiucvkwudthnxbwxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062936.962219-645-105387204845584/AnsiballZ_package_facts.py'
Nov 25 09:28:57 compute-1 sudo[57219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:57 compute-1 python3.9[57221]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 09:28:57 compute-1 sudo[57219]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:58 compute-1 sudo[57371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpogduwvejmxlhsahitbnvffsbhrdzrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062938.4284973-675-232448676628302/AnsiballZ_stat.py'
Nov 25 09:28:58 compute-1 sudo[57371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:58 compute-1 python3.9[57373]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:58 compute-1 sudo[57371]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:59 compute-1 sudo[57496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obbfoywnclwgsyrpkpwzmqzcusdycoby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062938.4284973-675-232448676628302/AnsiballZ_copy.py'
Nov 25 09:28:59 compute-1 sudo[57496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:59 compute-1 python3.9[57498]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062938.4284973-675-232448676628302/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:59 compute-1 sudo[57496]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:00 compute-1 sudo[57650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuefqgvzrbxcilszuqzwcdplhnrycwsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062939.9922893-720-7513691834040/AnsiballZ_stat.py'
Nov 25 09:29:00 compute-1 sudo[57650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:00 compute-1 python3.9[57652]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:00 compute-1 sudo[57650]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:00 compute-1 sudo[57775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwtzuggtvpwwrmepeknwysdhlygriuvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062939.9922893-720-7513691834040/AnsiballZ_copy.py'
Nov 25 09:29:00 compute-1 sudo[57775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:00 compute-1 python3.9[57777]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062939.9922893-720-7513691834040/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:00 compute-1 sudo[57775]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:02 compute-1 sudo[57929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxijeqjnyjjjnshfsgmceuwxtvstfsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062941.7554107-785-89765156707456/AnsiballZ_lineinfile.py'
Nov 25 09:29:02 compute-1 sudo[57929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:02 compute-1 python3.9[57931]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:02 compute-1 sudo[57929]: pam_unix(sudo:session): session closed for user root
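[editor's note] PEERNTP=no in /etc/sysconfig/network keeps the legacy dhclient hooks from feeding DHCP-advertised NTP servers to chronyd, so only the servers in the managed chrony.conf are used. The logged call, as a task:

```yaml
- name: Ignore NTP servers supplied via DHCP
  ansible.builtin.lineinfile:
    path: /etc/sysconfig/network
    regexp: '^PEERNTP='
    line: PEERNTP=no
    create: true
    mode: "0644"
    backup: true
```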
Nov 25 09:29:03 compute-1 sudo[58083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvcfrouzkrykllwbtdhxligrbuafocez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062943.2567232-828-54030717263123/AnsiballZ_setup.py'
Nov 25 09:29:03 compute-1 sudo[58083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:03 compute-1 python3.9[58085]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:29:03 compute-1 sudo[58083]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:04 compute-1 sudo[58167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllrewjjydmygthtyypjfzzkvoharqvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062943.2567232-828-54030717263123/AnsiballZ_systemd.py'
Nov 25 09:29:04 compute-1 sudo[58167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:04 compute-1 python3.9[58169]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:04 compute-1 sudo[58167]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:05 compute-1 sudo[58321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrawenfwcxmkcwzatymrhzimetiqrnmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062945.2927034-877-131438221692263/AnsiballZ_setup.py'
Nov 25 09:29:05 compute-1 sudo[58321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:05 compute-1 python3.9[58323]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:29:05 compute-1 sudo[58321]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:06 compute-1 sudo[58405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kawusyliajxlhuyqiilzoqzyksmmcfgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062945.2927034-877-131438221692263/AnsiballZ_systemd.py'
Nov 25 09:29:06 compute-1 sudo[58405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:06 compute-1 python3.9[58407]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:29:06 compute-1 chronyd[754]: chronyd exiting
Nov 25 09:29:06 compute-1 systemd[1]: Stopping NTP client/server...
Nov 25 09:29:06 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 09:29:06 compute-1 systemd[1]: Stopped NTP client/server.
Nov 25 09:29:06 compute-1 systemd[1]: Starting NTP client/server...
Nov 25 09:29:06 compute-1 chronyd[58415]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 09:29:06 compute-1 chronyd[58415]: Frequency -9.904 +/- 0.563 ppm read from /var/lib/chrony/drift
Nov 25 09:29:06 compute-1 chronyd[58415]: Loaded seccomp filter (level 2)
Nov 25 09:29:06 compute-1 systemd[1]: Started NTP client/server.
Nov 25 09:29:06 compute-1 sudo[58405]: pam_unix(sudo:session): session closed for user root
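[editor's note] The two systemd calls at 09:29:04 and 09:29:06 are the usual write-config-then-bounce pattern: first make sure chronyd is enabled and running, then restart it so the freshly written /etc/chrony.conf and /etc/sysconfig/chronyd take effect (the journal shows the old PID 754 exiting and version 4.8 starting with the saved drift file). As tasks:

```yaml
- name: Ensure chronyd is enabled and running
  ansible.builtin.systemd:
    name: chronyd
    enabled: true
    state: started

- name: Restart chronyd to apply the new configuration
  ansible.builtin.systemd:
    name: chronyd
    state: restarted
```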
Nov 25 09:29:06 compute-1 sshd-session[53467]: Connection closed by 192.168.122.30 port 38048
Nov 25 09:29:06 compute-1 sshd-session[53464]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:29:06 compute-1 systemd-logind[746]: Session 12 logged out. Waiting for processes to exit.
Nov 25 09:29:06 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 09:29:06 compute-1 systemd[1]: session-12.scope: Consumed 17.768s CPU time.
Nov 25 09:29:06 compute-1 systemd-logind[746]: Removed session 12.
Nov 25 09:29:12 compute-1 sshd-session[58441]: Accepted publickey for zuul from 192.168.122.30 port 32944 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:29:12 compute-1 systemd-logind[746]: New session 13 of user zuul.
Nov 25 09:29:12 compute-1 systemd[1]: Started Session 13 of User zuul.
Nov 25 09:29:12 compute-1 sshd-session[58441]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:29:12 compute-1 sudo[58594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crhdssqyiqoqgiyhlbuywqzczwacviei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062952.3575985-27-261954535361254/AnsiballZ_file.py'
Nov 25 09:29:12 compute-1 sudo[58594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:12 compute-1 python3.9[58596]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:12 compute-1 sudo[58594]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:13 compute-1 sudo[58746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oobechhoumvtkzgegcuyydizwtuzwdiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062953.0023048-63-260933276793903/AnsiballZ_stat.py'
Nov 25 09:29:13 compute-1 sudo[58746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:13 compute-1 python3.9[58748]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:13 compute-1 sudo[58746]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:13 compute-1 sudo[58869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waewbfwdfdhxfjdsinwtgpuirsvmqmee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062953.0023048-63-260933276793903/AnsiballZ_copy.py'
Nov 25 09:29:13 compute-1 sudo[58869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:14 compute-1 python3.9[58871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062953.0023048-63-260933276793903/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:14 compute-1 sudo[58869]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:14 compute-1 sshd-session[58444]: Connection closed by 192.168.122.30 port 32944
Nov 25 09:29:14 compute-1 sshd-session[58441]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:29:14 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 09:29:14 compute-1 systemd[1]: session-13.scope: Consumed 1.088s CPU time.
Nov 25 09:29:14 compute-1 systemd-logind[746]: Session 13 logged out. Waiting for processes to exit.
Nov 25 09:29:14 compute-1 systemd-logind[746]: Removed session 13.
Nov 25 09:29:20 compute-1 sshd-session[58896]: Accepted publickey for zuul from 192.168.122.30 port 33816 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:29:20 compute-1 systemd-logind[746]: New session 14 of user zuul.
Nov 25 09:29:20 compute-1 systemd[1]: Started Session 14 of User zuul.
Nov 25 09:29:20 compute-1 sshd-session[58896]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:29:21 compute-1 python3.9[59049]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:29:21 compute-1 sudo[59203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oatcdbonrmxwpuiobffzpnrurlrxknur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062961.5865173-60-279736921961126/AnsiballZ_file.py'
Nov 25 09:29:21 compute-1 sudo[59203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:22 compute-1 python3.9[59205]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:22 compute-1 sudo[59203]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:22 compute-1 sudo[59378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgkvlkctrrftddgceguuwfwtgctoxnai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062962.1856654-84-18455516059880/AnsiballZ_stat.py'
Nov 25 09:29:22 compute-1 sudo[59378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:22 compute-1 python3.9[59380]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:22 compute-1 sudo[59378]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:23 compute-1 sudo[59501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnxboblzqsinzajjveolgqltriltsyms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062962.1856654-84-18455516059880/AnsiballZ_copy.py'
Nov 25 09:29:23 compute-1 sudo[59501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:23 compute-1 python3.9[59503]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764062962.1856654-84-18455516059880/.source.json _original_basename=.ofjwt5tw follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:23 compute-1 sudo[59501]: pam_unix(sudo:session): session closed for user root
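[editor's note] The checksum logged for auth.json, bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f, matches the SHA-1 of the two-character string "{}", so the deployed containers-auth.json(5) credential store is almost certainly empty. Reconstructed under that inference:

```yaml
- name: Install an (empty) registry credential store for root
  ansible.builtin.copy:
    dest: /root/.config/containers/auth.json
    owner: zuul
    group: zuul
    mode: "0660"
    content: "{}"   # inferred from the logged SHA-1
```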
Nov 25 09:29:23 compute-1 sudo[59653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hptzxeqfbodvsqqwwmgtuxjozangnssw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062963.6383476-153-147154663047599/AnsiballZ_stat.py'
Nov 25 09:29:23 compute-1 sudo[59653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:23 compute-1 python3.9[59655]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:23 compute-1 sudo[59653]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:24 compute-1 sudo[59776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikhbmkkxcbaerbpqgurpgrrjbdzdhkoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062963.6383476-153-147154663047599/AnsiballZ_copy.py'
Nov 25 09:29:24 compute-1 sudo[59776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:24 compute-1 python3.9[59778]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062963.6383476-153-147154663047599/.source _original_basename=.0vxy0n5c follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:24 compute-1 sudo[59776]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:24 compute-1 sudo[59928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbfatoxqziukfzkjsentnetdrpdhusv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062964.534604-201-225760957933001/AnsiballZ_file.py'
Nov 25 09:29:24 compute-1 sudo[59928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:24 compute-1 python3.9[59930]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:29:24 compute-1 sudo[59928]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:25 compute-1 sudo[60080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcbangbfvrotzvccmdmnapjhkjaxbpbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062965.0216935-225-25851342957947/AnsiballZ_stat.py'
Nov 25 09:29:25 compute-1 sudo[60080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:25 compute-1 python3.9[60082]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:25 compute-1 sudo[60080]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:25 compute-1 sudo[60203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjbgzwxqrqskjmcwhicpqwayjsbwerhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062965.0216935-225-25851342957947/AnsiballZ_copy.py'
Nov 25 09:29:25 compute-1 sudo[60203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:25 compute-1 python3.9[60205]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062965.0216935-225-25851342957947/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:29:25 compute-1 sudo[60203]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:26 compute-1 sudo[60355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvtgdxkrlwwxikevlqqvjlqnmtraggay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062965.847567-225-178598282658560/AnsiballZ_stat.py'
Nov 25 09:29:26 compute-1 sudo[60355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:26 compute-1 python3.9[60357]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:26 compute-1 sudo[60355]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:26 compute-1 sudo[60478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jthwnyrczaqjjswxcoqyycwlveznaouy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062965.847567-225-178598282658560/AnsiballZ_copy.py'
Nov 25 09:29:26 compute-1 sudo[60478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:26 compute-1 python3.9[60480]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062965.847567-225-178598282658560/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:29:26 compute-1 sudo[60478]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:26 compute-1 sudo[60630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boyicscbaybjzwxrsrtdzaprftapdune ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062966.7067695-312-182591982891607/AnsiballZ_file.py'
Nov 25 09:29:26 compute-1 sudo[60630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:27 compute-1 python3.9[60632]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:27 compute-1 sudo[60630]: pam_unix(sudo:session): session closed for user root
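[editor's note] Note the logged mode=420 rather than mode=0644: an unquoted `mode: 0644` in playbook YAML is parsed as the octal literal 0644, i.e. the integer 420, and Ansible then applies 420 decimal, which is 0644 octal again, so the outcome is correct only because the two conversions cancel. Quoting the mode avoids relying on that round trip:

```yaml
- name: Ensure the systemd preset directory exists
  ansible.builtin.file:
    path: /etc/systemd/system-preset
    state: directory
    mode: "0644"   # quoted, unlike the call logged above
```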
Nov 25 09:29:27 compute-1 sudo[60782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjqnywgjddlstggirmqscbupibykphgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062967.1801722-336-19137009900479/AnsiballZ_stat.py'
Nov 25 09:29:27 compute-1 sudo[60782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:27 compute-1 python3.9[60784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:27 compute-1 sudo[60782]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:27 compute-1 sudo[60905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqnqnkscbfbvhlkkflpuszpdshqaesdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062967.1801722-336-19137009900479/AnsiballZ_copy.py'
Nov 25 09:29:27 compute-1 sudo[60905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:27 compute-1 python3.9[60907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062967.1801722-336-19137009900479/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:27 compute-1 sudo[60905]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:28 compute-1 sudo[61057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gykdbkmxfbqnhhkfxgonlxyfjrlgtmrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062968.050866-381-117284063594626/AnsiballZ_stat.py'
Nov 25 09:29:28 compute-1 sudo[61057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:28 compute-1 python3.9[61059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:28 compute-1 sudo[61057]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:28 compute-1 sudo[61180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjuibavblhkrjqxyefkahytmuegntjdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062968.050866-381-117284063594626/AnsiballZ_copy.py'
Nov 25 09:29:28 compute-1 sudo[61180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:28 compute-1 python3.9[61182]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062968.050866-381-117284063594626/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:28 compute-1 sudo[61180]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:29 compute-1 sudo[61332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfxvkmzyvvsjquwkbtiyqnfrzbguxqsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062968.9103124-426-255140272600198/AnsiballZ_systemd.py'
Nov 25 09:29:29 compute-1 sudo[61332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:29 compute-1 python3.9[61334]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:29 compute-1 systemd[1]: Reloading.
Nov 25 09:29:29 compute-1 systemd-rc-local-generator[61353]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:29 compute-1 systemd-sysv-generator[61358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:29 compute-1 systemd[1]: Starting dnf makecache...
Nov 25 09:29:29 compute-1 systemd[1]: Reloading.
Nov 25 09:29:29 compute-1 systemd-rc-local-generator[61393]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:29 compute-1 systemd-sysv-generator[61396]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:29 compute-1 dnf[61371]: Failed determining last makecache time.
Nov 25 09:29:29 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 09:29:29 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 09:29:30 compute-1 sudo[61332]: pam_unix(sudo:session): session closed for user root
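[editor's note] A single ansible.builtin.systemd call at 09:29:29 performs the daemon-reload (the paired "Reloading." journal entries), enables the new unit, and starts it; the "Finished" line, rather than a long-running "Started", suggests the unit is a oneshot. As a task:

```yaml
- name: Enable and start edpm-container-shutdown
  ansible.builtin.systemd:
    daemon_reload: true
    name: edpm-container-shutdown
    enabled: true
    state: started
```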
Nov 25 09:29:30 compute-1 dnf[61371]: delorean-openstack-barbican-42b4c41831408a8e323  20 kB/s | 3.0 kB     00:00
Nov 25 09:29:30 compute-1 dnf[61371]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  21 kB/s | 3.0 kB     00:00
Nov 25 09:29:30 compute-1 sudo[61564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xctezrtyvhxjyesnavlrycmnkxgimcmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062970.1474903-450-46904150813530/AnsiballZ_stat.py'
Nov 25 09:29:30 compute-1 sudo[61564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:30 compute-1 dnf[61371]: delorean-openstack-cinder-1c00d6490d88e436f26ef  21 kB/s | 3.0 kB     00:00
Nov 25 09:29:30 compute-1 python3.9[61566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:30 compute-1 sudo[61564]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:30 compute-1 dnf[61371]: delorean-python-stevedore-c4acc5639fd2329372142  21 kB/s | 3.0 kB     00:00
Nov 25 09:29:30 compute-1 dnf[61371]: delorean-python-observabilityclient-2f31846d73c  23 kB/s | 3.0 kB     00:00
Nov 25 09:29:30 compute-1 sudo[61690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjjzfhvsflcjwiraknqnvpmkubizdokp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062970.1474903-450-46904150813530/AnsiballZ_copy.py'
Nov 25 09:29:30 compute-1 sudo[61690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:30 compute-1 dnf[61371]: delorean-os-net-config-bbae2ed8a159b0435a473f38  22 kB/s | 3.0 kB     00:00
Nov 25 09:29:30 compute-1 python3.9[61692]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062970.1474903-450-46904150813530/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:30 compute-1 sudo[61690]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:30 compute-1 dnf[61371]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  22 kB/s | 3.0 kB     00:00
Nov 25 09:29:31 compute-1 dnf[61371]: delorean-python-designate-tests-tempest-347fdbc  23 kB/s | 3.0 kB     00:00
Nov 25 09:29:31 compute-1 sudo[61845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoatxodoitgpyaykbgvkrgetnukzrlza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062971.0032058-495-10669370358155/AnsiballZ_stat.py'
Nov 25 09:29:31 compute-1 sudo[61845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:31 compute-1 dnf[61371]: delorean-openstack-glance-1fd12c29b339f30fe823e  23 kB/s | 3.0 kB     00:00
Nov 25 09:29:31 compute-1 python3.9[61847]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:31 compute-1 sudo[61845]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:31 compute-1 dnf[61371]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  23 kB/s | 3.0 kB     00:00
Nov 25 09:29:31 compute-1 dnf[61371]: delorean-openstack-manila-3c01b7181572c95dac462  23 kB/s | 3.0 kB     00:00
Nov 25 09:29:31 compute-1 sudo[61971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvzvvyqoipavltayejktwohjrifnqtas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062971.0032058-495-10669370358155/AnsiballZ_copy.py'
Nov 25 09:29:31 compute-1 sudo[61971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:31 compute-1 python3.9[61973]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062971.0032058-495-10669370358155/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:31 compute-1 sudo[61971]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:32 compute-1 sudo[62123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuwhhonouzrqudjybjsmlsdxdxvvofqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062971.8468983-540-165556585541407/AnsiballZ_systemd.py'
Nov 25 09:29:32 compute-1 sudo[62123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:32 compute-1 python3.9[62125]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:32 compute-1 systemd[1]: Reloading.
Nov 25 09:29:32 compute-1 systemd-rc-local-generator[62148]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:32 compute-1 systemd-sysv-generator[62151]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:32 compute-1 systemd[1]: Reloading.
Nov 25 09:29:32 compute-1 systemd-sysv-generator[62186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:32 compute-1 systemd-rc-local-generator[62182]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:32 compute-1 systemd[1]: Starting Create netns directory...
Nov 25 09:29:32 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:29:32 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:29:32 compute-1 systemd[1]: Finished Create netns directory.
Nov 25 09:29:32 compute-1 sudo[62123]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:32 compute-1 dnf[61371]: delorean-python-whitebox-neutron-tests-tempest- 2.3 kB/s | 3.0 kB     00:01
Nov 25 09:29:32 compute-1 dnf[61371]: delorean-openstack-octavia-ba397f07a7331190208c  21 kB/s | 3.0 kB     00:00
Nov 25 09:29:33 compute-1 dnf[61371]: delorean-openstack-watcher-c014f81a8647287f6dcc  22 kB/s | 3.0 kB     00:00
Nov 25 09:29:33 compute-1 dnf[61371]: delorean-python-tcib-1124124ec06aadbac34f0d340b  20 kB/s | 3.0 kB     00:00
Nov 25 09:29:33 compute-1 python3.9[62355]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:29:33 compute-1 network[62373]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:29:33 compute-1 network[62374]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:29:33 compute-1 network[62375]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:29:33 compute-1 dnf[61371]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158  23 kB/s | 3.0 kB     00:00
Nov 25 09:29:33 compute-1 dnf[61371]: delorean-openstack-swift-dc98a8463506ac520c469a  23 kB/s | 3.0 kB     00:00
Nov 25 09:29:33 compute-1 dnf[61371]: delorean-python-tempestconf-8515371b7cceebd4282  22 kB/s | 3.0 kB     00:00
Nov 25 09:29:33 compute-1 dnf[61371]: delorean-openstack-heat-ui-013accbfd179753bc3f0  23 kB/s | 3.0 kB     00:00
Nov 25 09:29:33 compute-1 dnf[61371]: CentOS Stream 9 - BaseOS                         40 kB/s | 5.4 kB     00:00
Nov 25 09:29:34 compute-1 dnf[61371]: CentOS Stream 9 - AppStream                      30 kB/s | 6.1 kB     00:00
Nov 25 09:29:34 compute-1 dnf[61371]: CentOS Stream 9 - CRB                            39 kB/s | 5.3 kB     00:00
Nov 25 09:29:34 compute-1 dnf[61371]: CentOS Stream 9 - Extras packages                45 kB/s | 8.3 kB     00:00
Nov 25 09:29:34 compute-1 dnf[61371]: dlrn-antelope-testing                            22 kB/s | 3.0 kB     00:00
Nov 25 09:29:34 compute-1 dnf[61371]: dlrn-antelope-build-deps                         23 kB/s | 3.0 kB     00:00
Nov 25 09:29:34 compute-1 dnf[61371]: centos9-rabbitmq                                 47 kB/s | 3.0 kB     00:00
Nov 25 09:29:34 compute-1 dnf[61371]: centos9-storage                                 148 kB/s | 3.0 kB     00:00
Nov 25 09:29:34 compute-1 dnf[61371]: centos9-opstools                                 62 kB/s | 3.0 kB     00:00
Nov 25 09:29:35 compute-1 dnf[61371]: NFV SIG OpenvSwitch                             147 kB/s | 3.0 kB     00:00
Nov 25 09:29:35 compute-1 dnf[61371]: repo-setup-centos-appstream                     214 kB/s | 4.4 kB     00:00
Nov 25 09:29:35 compute-1 dnf[61371]: repo-setup-centos-baseos                        169 kB/s | 3.9 kB     00:00
Nov 25 09:29:35 compute-1 dnf[61371]: repo-setup-centos-highavailability               82 kB/s | 3.9 kB     00:00
Nov 25 09:29:35 compute-1 dnf[61371]: repo-setup-centos-powertools                     93 kB/s | 4.3 kB     00:00
Nov 25 09:29:35 compute-1 dnf[61371]: Extra Packages for Enterprise Linux 9 - x86_64  157 kB/s |  31 kB     00:00
Nov 25 09:29:35 compute-1 sudo[62661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uensqnjkdubeyrfpyjhndenrdwopizkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062975.4918325-588-62063903119988/AnsiballZ_systemd.py'
Nov 25 09:29:35 compute-1 sudo[62661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:35 compute-1 python3.9[62663]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:35 compute-1 systemd[1]: Reloading.
Nov 25 09:29:35 compute-1 dnf[61371]: Metadata cache created.
Nov 25 09:29:36 compute-1 systemd-rc-local-generator[62686]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:36 compute-1 systemd-sysv-generator[62689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:36 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 09:29:36 compute-1 systemd[1]: Finished dnf makecache.
Nov 25 09:29:36 compute-1 systemd[1]: dnf-makecache.service: Consumed 1.326s CPU time.
Nov 25 09:29:36 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 09:29:36 compute-1 iptables.init[62703]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 09:29:36 compute-1 iptables.init[62703]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 09:29:36 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 09:29:36 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 09:29:36 compute-1 sudo[62661]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:36 compute-1 sudo[62897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqpxrhupqyujykhivukgtlewsunqfcrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062976.5388906-588-120015966980890/AnsiballZ_systemd.py'
Nov 25 09:29:36 compute-1 sudo[62897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:36 compute-1 python3.9[62899]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:37 compute-1 sudo[62897]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:37 compute-1 sudo[63051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwhyqzjripkxzeugtgzgsjjxvtnoucvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062977.4951155-636-15178875778285/AnsiballZ_systemd.py'
Nov 25 09:29:37 compute-1 sudo[63051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:37 compute-1 python3.9[63053]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:37 compute-1 systemd[1]: Reloading.
Nov 25 09:29:38 compute-1 systemd-sysv-generator[63086]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:38 compute-1 systemd-rc-local-generator[63081]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:38 compute-1 systemd[1]: Starting Netfilter Tables...
Nov 25 09:29:38 compute-1 systemd[1]: Finished Netfilter Tables.
Nov 25 09:29:38 compute-1 sudo[63051]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:38 compute-1 sudo[63243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pysxvnialszmcxkqbapvntdvyfuuyewg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062978.4355645-660-176540766717798/AnsiballZ_command.py'
Nov 25 09:29:38 compute-1 sudo[63243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:38 compute-1 python3.9[63245]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:29:38 compute-1 sudo[63243]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:39 compute-1 sudo[63396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxzwyjiuklghswbwygvffqraecspadyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062979.482025-702-173908793785180/AnsiballZ_stat.py'
Nov 25 09:29:39 compute-1 sudo[63396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:39 compute-1 python3.9[63398]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:39 compute-1 sudo[63396]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:40 compute-1 sudo[63521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irvxaxectuecprtpsbkmnryhksiktnxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062979.482025-702-173908793785180/AnsiballZ_copy.py'
Nov 25 09:29:40 compute-1 sudo[63521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:40 compute-1 python3.9[63523]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062979.482025-702-173908793785180/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:40 compute-1 sudo[63521]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:40 compute-1 sudo[63674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iynorghgteeykxeftukclnfhjrccclex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062980.499256-747-240628168054988/AnsiballZ_systemd.py'
Nov 25 09:29:40 compute-1 sudo[63674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:40 compute-1 python3.9[63676]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:29:40 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 09:29:40 compute-1 sshd[964]: Received SIGHUP; restarting.
Nov 25 09:29:40 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 09:29:40 compute-1 sshd[964]: Server listening on 0.0.0.0 port 22.
Nov 25 09:29:40 compute-1 sshd[964]: Server listening on :: port 22.
Nov 25 09:29:40 compute-1 sudo[63674]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:41 compute-1 sudo[63830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqldagdvuxsamlefejglhnupfbracfqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062981.194514-771-246302104287976/AnsiballZ_file.py'
Nov 25 09:29:41 compute-1 sudo[63830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:41 compute-1 python3.9[63832]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:41 compute-1 sudo[63830]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:41 compute-1 sudo[63982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftaopozyosgruspsihoquodyiqpmluru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062981.7243109-795-235639646192255/AnsiballZ_stat.py'
Nov 25 09:29:41 compute-1 sudo[63982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:42 compute-1 python3.9[63984]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:42 compute-1 sudo[63982]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:42 compute-1 sudo[64105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrpsvvlluwkphjmgfeowxfstaxdttccw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062981.7243109-795-235639646192255/AnsiballZ_copy.py'
Nov 25 09:29:42 compute-1 sudo[64105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:42 compute-1 python3.9[64107]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062981.7243109-795-235639646192255/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:42 compute-1 sudo[64105]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:43 compute-1 sudo[64257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leoxiuhocxiteqyrjwiwbiougxkurwaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062982.8906913-849-156485743500158/AnsiballZ_timezone.py'
Nov 25 09:29:43 compute-1 sudo[64257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:43 compute-1 python3.9[64259]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 09:29:43 compute-1 systemd[1]: Starting Time & Date Service...
Nov 25 09:29:43 compute-1 systemd[1]: Started Time & Date Service.
Nov 25 09:29:43 compute-1 sudo[64257]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:43 compute-1 sudo[64413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcjsmszjkfvhpngjojeixqmsfrgiurui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062983.6952865-876-56823265604604/AnsiballZ_file.py'
Nov 25 09:29:43 compute-1 sudo[64413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:44 compute-1 python3.9[64415]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:44 compute-1 sudo[64413]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:44 compute-1 sudo[64565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqgmflndjtetujerktebyqllwarchthg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062984.241727-900-93404278810782/AnsiballZ_stat.py'
Nov 25 09:29:44 compute-1 sudo[64565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:44 compute-1 python3.9[64567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:44 compute-1 sudo[64565]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:44 compute-1 sudo[64688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzwbjaxynvjhulxyvdyjycwubedgwwgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062984.241727-900-93404278810782/AnsiballZ_copy.py'
Nov 25 09:29:44 compute-1 sudo[64688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:44 compute-1 python3.9[64690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062984.241727-900-93404278810782/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:44 compute-1 sudo[64688]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:45 compute-1 sudo[64840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isazwcewneklffzvfmciciemdbqrysns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062985.1248393-945-70196708736403/AnsiballZ_stat.py'
Nov 25 09:29:45 compute-1 sudo[64840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:45 compute-1 python3.9[64842]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:45 compute-1 sudo[64840]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:45 compute-1 sudo[64963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exrxkwmomczvjxomlpdbuiwffoylczwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062985.1248393-945-70196708736403/AnsiballZ_copy.py'
Nov 25 09:29:45 compute-1 sudo[64963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:45 compute-1 python3.9[64965]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062985.1248393-945-70196708736403/.source.yaml _original_basename=.6bs1vf8r follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:45 compute-1 sudo[64963]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:46 compute-1 sudo[65115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfqhvlleukmmwaxdcfzbimkmiortnepf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062986.0873423-990-38443355804661/AnsiballZ_stat.py'
Nov 25 09:29:46 compute-1 sudo[65115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:46 compute-1 python3.9[65117]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:46 compute-1 sudo[65115]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:46 compute-1 sudo[65238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inczlyehltuywksfsgfcfvyyrkwxyerg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062986.0873423-990-38443355804661/AnsiballZ_copy.py'
Nov 25 09:29:46 compute-1 sudo[65238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:46 compute-1 python3.9[65240]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062986.0873423-990-38443355804661/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:46 compute-1 sudo[65238]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:47 compute-1 sudo[65390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgulvtsnmrlrbliycqfejhjnlhfxsmnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062986.9993498-1035-133977142985243/AnsiballZ_command.py'
Nov 25 09:29:47 compute-1 sudo[65390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:47 compute-1 python3.9[65392]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:29:47 compute-1 sudo[65390]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:47 compute-1 sudo[65543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqfqbetfjzgeqyhcqtgutguligliwhib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062987.4998083-1059-216941461503187/AnsiballZ_command.py'
Nov 25 09:29:47 compute-1 sudo[65543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:47 compute-1 python3.9[65545]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:29:47 compute-1 sudo[65543]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:48 compute-1 sudo[65696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzoiigfmzmjcldwbytmvnurvjvfzstrc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764062987.9936745-1083-73770032163770/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 09:29:48 compute-1 sudo[65696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:48 compute-1 python3[65698]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 09:29:48 compute-1 sudo[65696]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:48 compute-1 sudo[65848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbbjgrclsetbveqnqfrfolfmfjrrpdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062988.6134467-1107-214995193344796/AnsiballZ_stat.py'
Nov 25 09:29:48 compute-1 sudo[65848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:48 compute-1 python3.9[65850]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:48 compute-1 sudo[65848]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:49 compute-1 sudo[65971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anwajfeeupfbducizomcpxkwkcvwapos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062988.6134467-1107-214995193344796/AnsiballZ_copy.py'
Nov 25 09:29:49 compute-1 sudo[65971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:49 compute-1 python3.9[65973]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062988.6134467-1107-214995193344796/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:49 compute-1 sudo[65971]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:49 compute-1 sudo[66123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmzrweknnynjxdwvymiqhvhkkblysyzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062989.5434082-1152-40059453272872/AnsiballZ_stat.py'
Nov 25 09:29:49 compute-1 sudo[66123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:49 compute-1 python3.9[66125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:49 compute-1 sudo[66123]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:50 compute-1 sudo[66246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oimobnmfylfikvfzizrwjzfxclrjiayx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062989.5434082-1152-40059453272872/AnsiballZ_copy.py'
Nov 25 09:29:50 compute-1 sudo[66246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:50 compute-1 python3.9[66248]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062989.5434082-1152-40059453272872/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:50 compute-1 sudo[66246]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:50 compute-1 sudo[66398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpkxhycgpynwmcojzisezqelwuwrzlsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062990.4826703-1197-258444495213324/AnsiballZ_stat.py'
Nov 25 09:29:50 compute-1 sudo[66398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:50 compute-1 python3.9[66400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:50 compute-1 sudo[66398]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:51 compute-1 sudo[66521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enhvifzqmwrntwrwjohcsyxmkwmctgct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062990.4826703-1197-258444495213324/AnsiballZ_copy.py'
Nov 25 09:29:51 compute-1 sudo[66521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:51 compute-1 python3.9[66523]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062990.4826703-1197-258444495213324/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:51 compute-1 sudo[66521]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:51 compute-1 sudo[66673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sserwmcplgfxcqzjlpthpjezktwcmduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062991.4192033-1242-70548727317458/AnsiballZ_stat.py'
Nov 25 09:29:51 compute-1 sudo[66673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:51 compute-1 python3.9[66675]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:51 compute-1 sudo[66673]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:52 compute-1 sudo[66796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwundwudvjgjpksdjqbigbbvwelucocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062991.4192033-1242-70548727317458/AnsiballZ_copy.py'
Nov 25 09:29:52 compute-1 sudo[66796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:52 compute-1 python3.9[66798]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062991.4192033-1242-70548727317458/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:52 compute-1 sudo[66796]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:52 compute-1 sudo[66948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mshbojufwqkitibszgwwezptlpcujnmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062992.3554971-1287-138601962948474/AnsiballZ_stat.py'
Nov 25 09:29:52 compute-1 sudo[66948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:52 compute-1 python3.9[66950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:52 compute-1 sudo[66948]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:52 compute-1 sudo[67071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcwjmnyyqmuqxqfekowntrwtgyqzyccu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062992.3554971-1287-138601962948474/AnsiballZ_copy.py'
Nov 25 09:29:52 compute-1 sudo[67071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:53 compute-1 python3.9[67073]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062992.3554971-1287-138601962948474/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:53 compute-1 sudo[67071]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:53 compute-1 sudo[67223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwnhnqdovycknxyylkbobbcjcvzaczde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062993.383733-1332-206646572995182/AnsiballZ_file.py'
Nov 25 09:29:53 compute-1 sudo[67223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:53 compute-1 python3.9[67225]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:53 compute-1 sudo[67223]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:54 compute-1 sudo[67375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcglwwecsnmpjzgyrwhdxmzdphyhwku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062993.9171867-1356-153658027972623/AnsiballZ_command.py'
Nov 25 09:29:54 compute-1 sudo[67375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:54 compute-1 python3.9[67377]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:29:54 compute-1 sudo[67375]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:54 compute-1 sudo[67534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhrifmgahxeugdxtxjndjauousqricm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062994.4905198-1380-221114960100013/AnsiballZ_blockinfile.py'
Nov 25 09:29:54 compute-1 sudo[67534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:54 compute-1 python3.9[67536]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:55 compute-1 sudo[67534]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:55 compute-1 sudo[67687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fchezgrqdgwxegzvquwgrvxayrnasmdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062995.2428408-1407-173007665971757/AnsiballZ_file.py'
Nov 25 09:29:55 compute-1 sudo[67687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:55 compute-1 python3.9[67689]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:55 compute-1 sudo[67687]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:55 compute-1 sudo[67839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwfozzjjtplbtmoplllgsqurpdtjydd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062995.7097633-1407-215764621143166/AnsiballZ_file.py'
Nov 25 09:29:55 compute-1 sudo[67839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:56 compute-1 python3.9[67841]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:56 compute-1 sudo[67839]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:56 compute-1 sudo[67991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdtqoidbmscrypybnaeaqrbuhtubafcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062996.34424-1452-124654032952107/AnsiballZ_mount.py'
Nov 25 09:29:56 compute-1 sudo[67991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:56 compute-1 python3.9[67993]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 09:29:56 compute-1 sudo[67991]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:57 compute-1 sudo[68144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozjpvymyogzvycaovdcazokdwqbgpyzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062996.9873996-1452-247618141936541/AnsiballZ_mount.py'
Nov 25 09:29:57 compute-1 sudo[68144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:57 compute-1 python3.9[68146]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 09:29:57 compute-1 sudo[68144]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:57 compute-1 sshd-session[58899]: Connection closed by 192.168.122.30 port 33816
Nov 25 09:29:57 compute-1 sshd-session[58896]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:29:57 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 09:29:57 compute-1 systemd[1]: session-14.scope: Consumed 23.989s CPU time.
Nov 25 09:29:57 compute-1 systemd-logind[746]: Session 14 logged out. Waiting for processes to exit.
Nov 25 09:29:57 compute-1 systemd-logind[746]: Removed session 14.
Nov 25 09:30:03 compute-1 sshd-session[68172]: Accepted publickey for zuul from 192.168.122.30 port 37718 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:30:03 compute-1 systemd-logind[746]: New session 15 of user zuul.
Nov 25 09:30:03 compute-1 systemd[1]: Started Session 15 of User zuul.
Nov 25 09:30:03 compute-1 sshd-session[68172]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:03 compute-1 sudo[68325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkmvvntnoiqwvhcrgtvnhtffraooxtyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063003.4507856-19-135488775195440/AnsiballZ_tempfile.py'
Nov 25 09:30:03 compute-1 sudo[68325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:04 compute-1 python3.9[68327]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 09:30:04 compute-1 sudo[68325]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:04 compute-1 sudo[68477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgzevsiniyshuyvfujnsnfbfdobfqdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063004.2776709-55-29329792864464/AnsiballZ_stat.py'
Nov 25 09:30:04 compute-1 sudo[68477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:04 compute-1 python3.9[68479]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:30:04 compute-1 sudo[68477]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:05 compute-1 sudo[68629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-refverwhbhyxlvdrlidcotjdqxdqsxlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063004.9368005-85-268247168919378/AnsiballZ_setup.py'
Nov 25 09:30:05 compute-1 sudo[68629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:05 compute-1 python3.9[68631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:05 compute-1 sudo[68629]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:06 compute-1 sudo[68781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zipogersypcflfkgwshfqvugegsrewhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063005.8404303-110-242437153702631/AnsiballZ_blockinfile.py'
Nov 25 09:30:06 compute-1 sudo[68781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:06 compute-1 python3.9[68783]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBYH+LEkGk38QCoX+uCPb3zHk7+XCeEWV22HpalqUrYF70U5Myra5/E2/v2kioqGNh5TR9q+A7kNO0JU78Ai+6UBv5aJlbEptu33E5t38qiAv3rpyypYwQ8PdWBl7OCeDcqz0EyYAZEw7rLbCWimqRhYsSXuUND+rRboiuI8DEX229oAgnRmIjyPJTTdKGiM3FTdl9YiSbYNyBykzJ8AugCfme4+hmds+8LJloh2aJjRJCs3/GvxdaGJcjBWAqN3Aurg+gPekKe4fwmOir2+KpqBDQE9YMfiBvraaCMGrDXkAjPdsycsvGMsWckhOgEW5qpTIt+ca5kcrK43ChAH5R/PpHlHnEYqw2o26BLmqIejfmXKRSxmH/Fq9Ldj3DMLJr4NTFBfJAl8wqsUKs6/0jngwOCYz6NLs7GgGZLMYv6wbRVgUpCc4ikQ8f1EDmXTdtqxef+QdmLTgWY1qCqe5lL8BcDDCjOTLJ6bbLUAdubY1z4vb6SFVcamH4SkSCFxs=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHCQQOw3EbtZ2XAFA2gGrEnb7MaEAFwIJjyskket7pD
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFP8ctNKDLqIcODtgMol02WD/NgFM5ja/WeN20e07JH/Mz/Ge/v2/ybsY8LOtiyzixlX47XT8hWBR4IBwS2uvfM=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/QqShzRf5Fxs30q3tSf7IhrByfRVQwrs4CVW/gcd2Sdcp7tmVXVNFpJc8XlgTmWxcSLbFtAv0HgJOJ3p6/+g394nChAIaM55uhK/RLFqBZ/byiFqEjvN2LkEWuUVdvbZM808GhONJnWQtg70nn99jeLP34zkSD7gsU7cykxF7K7VyeBfeSiuOcyTjXvVfXr9TZxCZMrsb4eWFZAZ4QERXITlLcZthwc0kd17QWJWLo8Ssv4Qu0DtCHtqHO07s7Nz/CpSs0TX5jVM+C+2rAMn+aAZ4J25X8di4ABF5tO27d+ePazRlU5PWjb8n6kdy1B/cjHgvajXOoUPb5RjyVx2IgULBXaWsIRO23wp8YqiE1OdTly2+Nr5KiTPvR5yqq9C6aBNzS7YyUQc6Rf2RBAaLQbA36NJLGvPUWC7iYVtWdGoTfcTmzqkD2s3hzZl+zU2xNS0IpwByJsOJVIijtGFh1Y45uujq0WUJNPf1ayrY2Z/TV+iO/1iah3JArjyNiq8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPD1sScOy6Aiq5PZkl3KepHqJnvlMIZW4R0DzMl4b3w
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO/iVb5vehoW1eqrk4jdR3j25kacpoWkaPIq4PHAndTN4lXAEwSRab7iUqXkAAaYvUnrCJ86WUoAYGkII0QB5wA=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSE1VMIuB9MiQ17/QHDRAbfwrBNbTb+wZH1rCqeQvAxcHqZYp6TugJnyWX+nah5oDk8vz2PCIUW2lm/tVgP4Y2JHeaN2uMNgVnz1WtD6lCQORMYi1R+KpBgiAQoZAjAyC5Ugx5LWbDvrwtpt0zi2DEgCr2Zao5DG5UAaIcs7/Rj2LRx3hgA4jJ9xJKHVi5bUZfjIlWxLzVXVYT+dvUNrZoiVMBcaUMZRpU4tJ/76mE2jbqsfHEPFwHZ6ljoIegFbzNYoKYMCPK+DeOs/73xD4r/nzeQOK3IQzMOEEVaUYvceA+EPX4M+MrKfkNrJwf35qTOFJpb368gJsebA9uXjzPfzX/uh1atxLv5SihEzC5fHdiZ3BZ3wLEy0C7lvXyRBZdQx+anEYQnDepM/ThOT4YR2BNSCdRS2OpzeSJDS+o5CS++zCqWM4yI3lufZm8O8JqPEblV518196TSyMlAOzPbjEjrUaYGdljY5S2OzKA4PBJW4hW4RyBtjcZWJBpNlM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBoG9NSSqw98oHfgpW8u+wJYHDhMiOjIhpCElLIROYdO
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFL1noqwoCl3YzxWiRl0GcsDxYERT1o8e2TvLqUkxWuv8xj0oHuq7+GhcKu7HpiCls71ko7MDcOX4zteG544k4=
                                             create=True mode=0644 path=/tmp/ansible.bzxm3qw0 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:06 compute-1 sudo[68781]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:06 compute-1 sudo[68933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aznjwmuinojbbgddzqfmkjbtupzlsofb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063006.4662356-134-278675025456003/AnsiballZ_command.py'
Nov 25 09:30:06 compute-1 sudo[68933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:06 compute-1 python3.9[68935]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.bzxm3qw0' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:06 compute-1 sudo[68933]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:07 compute-1 sudo[69087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouvkfuxfqfypsfgvbdnalmzkowatraju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063007.0469108-158-2914408285393/AnsiballZ_file.py'
Nov 25 09:30:07 compute-1 sudo[69087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:07 compute-1 python3.9[69089]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.bzxm3qw0 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:07 compute-1 sudo[69087]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:07 compute-1 sshd-session[68175]: Connection closed by 192.168.122.30 port 37718
Nov 25 09:30:07 compute-1 sshd-session[68172]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:30:07 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 09:30:07 compute-1 systemd[1]: session-15.scope: Consumed 2.249s CPU time.
Nov 25 09:30:07 compute-1 systemd-logind[746]: Session 15 logged out. Waiting for processes to exit.
Nov 25 09:30:07 compute-1 systemd-logind[746]: Removed session 15.
Nov 25 09:30:13 compute-1 sshd-session[69114]: Accepted publickey for zuul from 192.168.122.30 port 34708 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:30:13 compute-1 systemd-logind[746]: New session 16 of user zuul.
Nov 25 09:30:13 compute-1 systemd[1]: Started Session 16 of User zuul.
Nov 25 09:30:13 compute-1 sshd-session[69114]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:13 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 09:30:14 compute-1 python3.9[69270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:14 compute-1 sudo[69424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrogcufzfspqhirvttwpylinbujtmcwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063014.4292934-57-15011025483969/AnsiballZ_systemd.py'
Nov 25 09:30:14 compute-1 sudo[69424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:15 compute-1 python3.9[69426]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 09:30:15 compute-1 sudo[69424]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:15 compute-1 sudo[69578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrivyfpqnwhbdmuffufwkvpalbsghrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063015.2580612-81-128115158435395/AnsiballZ_systemd.py'
Nov 25 09:30:15 compute-1 sudo[69578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:15 compute-1 python3.9[69580]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:30:15 compute-1 sudo[69578]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:16 compute-1 sudo[69731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snvkztmtrjxguvzdzhpozpdiwwicpuku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063015.9284918-108-67525626000062/AnsiballZ_command.py'
Nov 25 09:30:16 compute-1 sudo[69731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:16 compute-1 python3.9[69733]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:16 compute-1 sudo[69731]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:16 compute-1 sudo[69884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eijcgbzififbvqsndtfosmlcbjuctkbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063016.5554037-132-46749014006073/AnsiballZ_stat.py'
Nov 25 09:30:16 compute-1 sudo[69884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:17 compute-1 python3.9[69886]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:30:17 compute-1 sudo[69884]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:17 compute-1 sudo[70038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qotmcotqdherivftngkgrylmkjftxkwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063017.1540136-156-106569338610396/AnsiballZ_command.py'
Nov 25 09:30:17 compute-1 sudo[70038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:17 compute-1 python3.9[70040]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:17 compute-1 sudo[70038]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:17 compute-1 sudo[70193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpdiyvrkaizjsndurytjaflmvhqvmdmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063017.6468945-180-98882058923836/AnsiballZ_file.py'
Nov 25 09:30:17 compute-1 sudo[70193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:18 compute-1 python3.9[70195]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:18 compute-1 sudo[70193]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:18 compute-1 sshd-session[69117]: Connection closed by 192.168.122.30 port 34708
Nov 25 09:30:18 compute-1 sshd-session[69114]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:30:18 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 09:30:18 compute-1 systemd[1]: session-16.scope: Consumed 3.110s CPU time.
Nov 25 09:30:18 compute-1 systemd-logind[746]: Session 16 logged out. Waiting for processes to exit.
Nov 25 09:30:18 compute-1 systemd-logind[746]: Removed session 16.
Nov 25 09:30:23 compute-1 sshd-session[70220]: Accepted publickey for zuul from 192.168.122.30 port 36966 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:30:23 compute-1 systemd-logind[746]: New session 17 of user zuul.
Nov 25 09:30:23 compute-1 systemd[1]: Started Session 17 of User zuul.
Nov 25 09:30:23 compute-1 sshd-session[70220]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:24 compute-1 python3.9[70373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:24 compute-1 sudo[70527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagitnuoqcmynaqbvdsxzengjrvsanvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063024.5984414-63-202305581930600/AnsiballZ_setup.py'
Nov 25 09:30:24 compute-1 sudo[70527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-1 python3.9[70529]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:30:25 compute-1 sudo[70527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-1 sudo[70611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hykimygaeohjmlwvyvaqyligndtxhdds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063024.5984414-63-202305581930600/AnsiballZ_dnf.py'
Nov 25 09:30:25 compute-1 sudo[70611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-1 python3.9[70613]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 09:30:26 compute-1 sudo[70611]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:27 compute-1 python3.9[70764]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:28 compute-1 python3.9[70915]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:30:28 compute-1 python3.9[71065]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:30:29 compute-1 python3.9[71215]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:30:29 compute-1 sshd-session[70223]: Connection closed by 192.168.122.30 port 36966
Nov 25 09:30:29 compute-1 sshd-session[70220]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:30:29 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 09:30:29 compute-1 systemd[1]: session-17.scope: Consumed 4.252s CPU time.
Nov 25 09:30:29 compute-1 systemd-logind[746]: Session 17 logged out. Waiting for processes to exit.
Nov 25 09:30:29 compute-1 systemd-logind[746]: Removed session 17.
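Session 17 was a reboot-requirement probe: it installed yum-utils, ran needs-restarting -r, and looked for marker files under /var/lib/openstack/reboot_required/. The same check by hand (needs-restarting -r exits 1 when the running kernel or core services need a reboot, 0 otherwise):

    dnf -y install yum-utils
    # exit status tells you whether a full reboot is pending
    needs-restarting -r && echo 'no reboot needed' || echo 'reboot required'
    ls /var/lib/openstack/reboot_required/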
Nov 25 09:30:36 compute-1 sshd-session[71240]: Accepted publickey for zuul from 192.168.26.191 port 38726 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:30:36 compute-1 systemd-logind[746]: New session 18 of user zuul.
Nov 25 09:30:36 compute-1 systemd[1]: Started Session 18 of User zuul.
Nov 25 09:30:36 compute-1 sshd-session[71240]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:36 compute-1 sudo[71316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olsmcjwagigsjqeivachjmkwhtcfsqrw ; /usr/bin/python3'
Nov 25 09:30:36 compute-1 sudo[71316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:36 compute-1 useradd[71320]: new group: name=ceph-admin, GID=42478
Nov 25 09:30:36 compute-1 useradd[71320]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 25 09:30:36 compute-1 sudo[71316]: pam_unix(sudo:session): session closed for user root
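The first become task of session 18 creates the ceph-admin account that cephadm will use over SSH. Roughly the equivalent shell, with the UID taken from the log; the paired group (GID 42478) was created implicitly by useradd, as the two useradd[71320] lines show:

    useradd -m -u 42477 -s /bin/bash ceph-admin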
Nov 25 09:30:36 compute-1 sudo[71402]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwfpeojepijcfuazyeslpytavxuxuwtl ; /usr/bin/python3'
Nov 25 09:30:36 compute-1 sudo[71402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:36 compute-1 sudo[71402]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:37 compute-1 sudo[71475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npueredeikktexkzsxfmltqakonmnoix ; /usr/bin/python3'
Nov 25 09:30:37 compute-1 sudo[71475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:37 compute-1 sudo[71475]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:37 compute-1 sudo[71525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drgggbcjooqsrojxtkqmzfvczcwmkugj ; /usr/bin/python3'
Nov 25 09:30:37 compute-1 sudo[71525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:37 compute-1 sudo[71525]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:37 compute-1 sudo[71551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfnxzskhiyfimbzivyqtticlfaiotcwj ; /usr/bin/python3'
Nov 25 09:30:37 compute-1 sudo[71551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:37 compute-1 sudo[71551]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:37 compute-1 sudo[71577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgznqkpllxhhzkxhrwabtzwyzvtztbtv ; /usr/bin/python3'
Nov 25 09:30:37 compute-1 sudo[71577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:38 compute-1 sudo[71577]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:38 compute-1 sudo[71603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihgwwhhnokmwrglwyrxdxaepqlimxorq ; /usr/bin/python3'
Nov 25 09:30:38 compute-1 sudo[71603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:38 compute-1 sudo[71603]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:38 compute-1 sudo[71681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewibtrqccxsoumyjpustqiboykwvyufq ; /usr/bin/python3'
Nov 25 09:30:38 compute-1 sudo[71681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:38 compute-1 sudo[71681]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:38 compute-1 sudo[71754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzafodseqrmqckjczrajlhuhnxmwllix ; /usr/bin/python3'
Nov 25 09:30:38 compute-1 sudo[71754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:39 compute-1 sudo[71754]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:39 compute-1 sudo[71856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykpgibwthwkgiacwapmysnyrnrfweozf ; /usr/bin/python3'
Nov 25 09:30:39 compute-1 sudo[71856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:39 compute-1 sudo[71856]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:39 compute-1 sudo[71929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vocjozpyhfdboavhwvsdrkkuvrazmzku ; /usr/bin/python3'
Nov 25 09:30:39 compute-1 sudo[71929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:39 compute-1 sudo[71929]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:40 compute-1 sudo[71979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyaoisdphbdrshiohvhljhvotfktzkms ; /usr/bin/python3'
Nov 25 09:30:40 compute-1 sudo[71979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:40 compute-1 python3[71981]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:40 compute-1 sudo[71979]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:41 compute-1 sudo[72070]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-takaqivcuynnifvdrhzoggajiykogiqt ; /usr/bin/python3'
Nov 25 09:30:41 compute-1 sudo[72070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:41 compute-1 python3[72072]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 09:30:42 compute-1 sudo[72070]: pam_unix(sudo:session): session closed for user root
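Before touching storage, the play installs the host-side prerequisites for the loop-device and container steps that follow. Shell equivalent of the logged dnf task:

    dnf -y install util-linux lvm2 jq podman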
Nov 25 09:30:42 compute-1 sudo[72097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgzgdpafctyysdivgjuuqjqyfpxtijyt ; /usr/bin/python3'
Nov 25 09:30:42 compute-1 sudo[72097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:42 compute-1 python3[72099]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:30:42 compute-1 sudo[72097]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:42 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:30:42 compute-1 sudo[72124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfhmoozoxfyuklwjpzgzuzlcpqukqpxy ; /usr/bin/python3'
Nov 25 09:30:42 compute-1 sudo[72124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:43 compute-1 python3[72126]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:43 compute-1 kernel: loop: module loaded
Nov 25 09:30:43 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Nov 25 09:30:43 compute-1 sudo[72124]: pam_unix(sudo:session): session closed for user root
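This task fabricates a block device for a test OSD: dd with count=0 seek=20G extends the file to 20 GiB without writing any blocks (a sparse file), and losetup exposes it as /dev/loop3. The kernel's reported capacity, 41943040 512-byte sectors, is exactly 20 GiB. The commands, as logged:

    # create a 20 GiB sparse backing file (no data is actually written)
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    # attach it as a loop block device
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk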
Nov 25 09:30:43 compute-1 sudo[72159]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osxbapcdpbmqbtdelyjfpqnswqtmvkno ; /usr/bin/python3'
Nov 25 09:30:43 compute-1 sudo[72159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:43 compute-1 python3[72161]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:43 compute-1 lvm[72164]: PV /dev/loop3 not used.
Nov 25 09:30:43 compute-1 lvm[72173]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:30:43 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 25 09:30:43 compute-1 lvm[72175]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 25 09:30:43 compute-1 sudo[72159]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:43 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
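On top of the loop device the play builds a one-LV volume group for ceph-volume to consume; the lvm-activate-ceph_vg0.service lines show systemd's event-based autoactivation (vgchange -aay) firing as soon as the VG is complete. The logged sequence:

    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # one LV spanning the whole VG
    lvs ceph_vg0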
Nov 25 09:30:43 compute-1 sudo[72251]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcynrsasryeagjumerjfrbowbbiwhiiy ; /usr/bin/python3'
Nov 25 09:30:43 compute-1 sudo[72251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:43 compute-1 python3[72253]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:30:43 compute-1 sudo[72251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:43 compute-1 sudo[72324]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqigiplraaqizpqrptzpvdtbwotyhhjl ; /usr/bin/python3'
Nov 25 09:30:43 compute-1 sudo[72324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:44 compute-1 python3[72326]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063043.645383-37201-63692593856280/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:44 compute-1 sudo[72324]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:44 compute-1 sudo[72374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqladcqkhgcmisuecwnuvduwzqyfvlrz ; /usr/bin/python3'
Nov 25 09:30:44 compute-1 sudo[72374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:44 compute-1 python3[72376]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:30:44 compute-1 systemd[1]: Reloading.
Nov 25 09:30:44 compute-1 systemd-sysv-generator[72402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:30:44 compute-1 systemd-rc-local-generator[72398]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:30:44 compute-1 systemd[1]: Starting Ceph OSD losetup...
Nov 25 09:30:44 compute-1 bash[72416]: /dev/loop3: [64513]:4327758 (/var/lib/ceph-osd-0.img)
Nov 25 09:30:44 compute-1 systemd[1]: Finished Ceph OSD losetup.
Nov 25 09:30:44 compute-1 lvm[72417]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:30:44 compute-1 lvm[72417]: VG ceph_vg0 finished
Nov 25 09:30:44 compute-1 sudo[72374]: pam_unix(sudo:session): session closed for user root
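Because loop attachments do not survive a reboot, a ceph-osd-losetup-0.service unit is templated in (from ceph-osd-losetup.service.j2) and enabled so the backing file is re-attached at boot. The rendered unit body is not captured in this log, only its checksum, so the heredoc below is a hypothetical reconstruction, shaped to match the losetup status line that bash[72416] printed at startup:

    # HYPOTHETICAL unit content -- shape inferred, not taken from this log
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # attach the backing file if not yet attached, then print the device status
    ExecStart=/bin/bash -c 'losetup /dev/loop3 /var/lib/ceph-osd-0.img || true; losetup /dev/loop3'
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl enable --now ceph-osd-losetup-0.service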
Nov 25 09:30:46 compute-1 python3[72441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:31:15 compute-1 chronyd[58415]: Selected source 142.202.190.19 (pool.ntp.org)
Nov 25 09:31:51 compute-1 sshd-session[72485]: Accepted publickey for ceph-admin from 192.168.122.100 port 45532 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:51 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 09:31:51 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 09:31:51 compute-1 systemd-logind[746]: New session 19 of user ceph-admin.
Nov 25 09:31:51 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 09:31:51 compute-1 systemd[1]: Starting User Manager for UID 42477...
Nov 25 09:31:51 compute-1 systemd[72489]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:51 compute-1 systemd[72489]: Queued start job for default target Main User Target.
Nov 25 09:31:51 compute-1 systemd[72489]: Created slice User Application Slice.
Nov 25 09:31:51 compute-1 systemd[72489]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:31:51 compute-1 systemd[72489]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:31:51 compute-1 systemd[72489]: Reached target Paths.
Nov 25 09:31:51 compute-1 systemd[72489]: Reached target Timers.
Nov 25 09:31:51 compute-1 systemd[72489]: Starting D-Bus User Message Bus Socket...
Nov 25 09:31:51 compute-1 systemd[72489]: Starting Create User's Volatile Files and Directories...
Nov 25 09:31:51 compute-1 systemd[72489]: Finished Create User's Volatile Files and Directories.
Nov 25 09:31:51 compute-1 systemd[72489]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:31:51 compute-1 systemd[72489]: Reached target Sockets.
Nov 25 09:31:51 compute-1 systemd[72489]: Reached target Basic System.
Nov 25 09:31:51 compute-1 systemd[1]: Started User Manager for UID 42477.
Nov 25 09:31:51 compute-1 systemd[72489]: Reached target Main User Target.
Nov 25 09:31:51 compute-1 systemd[72489]: Startup finished in 84ms.
Nov 25 09:31:51 compute-1 systemd[1]: Started Session 19 of User ceph-admin.
Nov 25 09:31:51 compute-1 sshd-session[72485]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:51 compute-1 sshd-session[72503]: Accepted publickey for ceph-admin from 192.168.122.100 port 45534 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:51 compute-1 systemd-logind[746]: New session 21 of user ceph-admin.
Nov 25 09:31:51 compute-1 systemd[1]: Started Session 21 of User ceph-admin.
Nov 25 09:31:51 compute-1 sshd-session[72503]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:51 compute-1 sudo[72510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:51 compute-1 sudo[72510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:51 compute-1 sudo[72510]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:51 compute-1 sshd-session[72535]: Accepted publickey for ceph-admin from 192.168.122.100 port 45536 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:51 compute-1 systemd-logind[746]: New session 22 of user ceph-admin.
Nov 25 09:31:51 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Nov 25 09:31:51 compute-1 sshd-session[72535]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:51 compute-1 sudo[72539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Nov 25 09:31:51 compute-1 sudo[72539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:51 compute-1 sudo[72539]: pam_unix(sudo:session): session closed for user root
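From here on the cephadm orchestrator drives the node through a series of short SSH sessions as ceph-admin, each wrapping one sudo command; the first real action is a host sanity check. Re-run by hand it would look like this (fsid and binary path verbatim from the log):

    fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
    cephadm=/var/lib/ceph/$fsid/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
    sudo python3 "$cephadm" --timeout 895 check-host --expect-hostname compute-1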
Nov 25 09:31:51 compute-1 sshd-session[72564]: Accepted publickey for ceph-admin from 192.168.122.100 port 45550 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:51 compute-1 systemd-logind[746]: New session 23 of user ceph-admin.
Nov 25 09:31:51 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Nov 25 09:31:51 compute-1 sshd-session[72564]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:52 compute-1 sudo[72568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 25 09:31:52 compute-1 sudo[72568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:52 compute-1 sudo[72568]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:52 compute-1 sshd-session[72593]: Accepted publickey for ceph-admin from 192.168.122.100 port 48118 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:52 compute-1 systemd-logind[746]: New session 24 of user ceph-admin.
Nov 25 09:31:52 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Nov 25 09:31:52 compute-1 sshd-session[72593]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:52 compute-1 sudo[72597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:52 compute-1 sudo[72597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:52 compute-1 sudo[72597]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:52 compute-1 sshd-session[72622]: Accepted publickey for ceph-admin from 192.168.122.100 port 48128 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:52 compute-1 systemd-logind[746]: New session 25 of user ceph-admin.
Nov 25 09:31:52 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Nov 25 09:31:52 compute-1 sshd-session[72622]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:52 compute-1 sudo[72626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:52 compute-1 sudo[72626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:52 compute-1 sudo[72626]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:52 compute-1 sshd-session[72651]: Accepted publickey for ceph-admin from 192.168.122.100 port 48144 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:52 compute-1 systemd-logind[746]: New session 26 of user ceph-admin.
Nov 25 09:31:52 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Nov 25 09:31:52 compute-1 sshd-session[72651]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:52 compute-1 sudo[72655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 25 09:31:52 compute-1 sudo[72655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:52 compute-1 sudo[72655]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:52 compute-1 sshd-session[72680]: Accepted publickey for ceph-admin from 192.168.122.100 port 48146 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:52 compute-1 systemd-logind[746]: New session 27 of user ceph-admin.
Nov 25 09:31:52 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Nov 25 09:31:52 compute-1 sshd-session[72680]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:53 compute-1 sudo[72684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:53 compute-1 sudo[72684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:53 compute-1 sudo[72684]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:53 compute-1 sshd-session[72709]: Accepted publickey for ceph-admin from 192.168.122.100 port 48160 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:53 compute-1 systemd-logind[746]: New session 28 of user ceph-admin.
Nov 25 09:31:53 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Nov 25 09:31:53 compute-1 sshd-session[72709]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:53 compute-1 sudo[72713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 25 09:31:53 compute-1 sudo[72713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:53 compute-1 sudo[72713]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:53 compute-1 sshd-session[72738]: Accepted publickey for ceph-admin from 192.168.122.100 port 48176 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:53 compute-1 systemd-logind[746]: New session 29 of user ceph-admin.
Nov 25 09:31:53 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Nov 25 09:31:53 compute-1 sshd-session[72738]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:54 compute-1 sshd-session[72765]: Accepted publickey for ceph-admin from 192.168.122.100 port 48184 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:54 compute-1 systemd-logind[746]: New session 30 of user ceph-admin.
Nov 25 09:31:54 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Nov 25 09:31:54 compute-1 sshd-session[72765]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:54 compute-1 sudo[72769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 25 09:31:54 compute-1 sudo[72769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:54 compute-1 sudo[72769]: pam_unix(sudo:session): session closed for user root
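Sessions 24 through 30 are one file transfer: cephadm never writes payloads in place, it stages them under /tmp/cephadm-<fsid>/, fixes ownership and mode, then renames into the destination. Distilled from the logged commands, with FILE standing in for the cephadm binary being refreshed:

    tmp=/tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
    dst=/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
    sudo mkdir -p "$dst" "$tmp$dst"
    sudo touch "$tmp$dst/FILE.new"
    sudo chown -R ceph-admin "$tmp"     # let the SSH user write the payload
    sudo chmod 644 "$tmp$dst/FILE.new"
    sudo mv "$tmp$dst/FILE.new" "$dst/FILE"   # atomic rename when both sit on one filesystem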
Nov 25 09:31:54 compute-1 sshd-session[72794]: Accepted publickey for ceph-admin from 192.168.122.100 port 48194 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:54 compute-1 systemd-logind[746]: New session 31 of user ceph-admin.
Nov 25 09:31:54 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Nov 25 09:31:54 compute-1 sshd-session[72794]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:54 compute-1 sudo[72798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Nov 25 09:31:54 compute-1 sudo[72798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:54 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:54 compute-1 sudo[72798]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:54 compute-1 sudo[72840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:54 compute-1 sudo[72840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:54 compute-1 sudo[72840]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:54 compute-1 sudo[72865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 25 09:31:54 compute-1 sudo[72865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:55 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:55 compute-1 sudo[72865]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:55 compute-1 sudo[72909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:55 compute-1 sudo[72909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:55 compute-1 sudo[72909]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:55 compute-1 sudo[72934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:31:55 compute-1 sudo[72934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:55 compute-1 sudo[72934]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:55 compute-1 sudo[72990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:55 compute-1 sudo[72990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:55 compute-1 sudo[72990]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:55 compute-1 sudo[73015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:31:55 compute-1 sudo[73015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:55 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73050 (sysctl)
Nov 25 09:31:55 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:55 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 09:31:55 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 09:31:55 compute-1 sudo[73015]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:56 compute-1 sudo[73072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:56 compute-1 sudo[73072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:56 compute-1 sudo[73072]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:56 compute-1 sudo[73097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 09:31:56 compute-1 sudo[73097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:56 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:56 compute-1 sudo[73097]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:56 compute-1 sudo[73138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:56 compute-1 sudo[73138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:56 compute-1 sudo[73138]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:56 compute-1 sudo[73163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- inventory --format=json-pretty --filter-for-batch
Nov 25 09:31:56 compute-1 sudo[73163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1735297338-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 09:32:13 compute-1 podman[73218]: 2025-11-25 09:32:13.470849198 +0000 UTC m=+16.901840537 container create 4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:32:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2780977203-merged.mount: Deactivated successfully.
Nov 25 09:32:13 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 09:32:13 compute-1 systemd[1]: Started libpod-conmon-4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80.scope.
Nov 25 09:32:13 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:13 compute-1 podman[73218]: 2025-11-25 09:32:13.458948274 +0000 UTC m=+16.889939614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:13 compute-1 podman[73218]: 2025-11-25 09:32:13.535863869 +0000 UTC m=+16.966855229 container init 4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:32:13 compute-1 podman[73218]: 2025-11-25 09:32:13.542750891 +0000 UTC m=+16.973742230 container start 4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:13 compute-1 podman[73218]: 2025-11-25 09:32:13.543926057 +0000 UTC m=+16.974917416 container attach 4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:13 compute-1 focused_wright[73269]: 167 167
Nov 25 09:32:13 compute-1 systemd[1]: libpod-4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80.scope: Deactivated successfully.
Nov 25 09:32:13 compute-1 podman[73218]: 2025-11-25 09:32:13.546959625 +0000 UTC m=+16.977950964 container died 4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-c159857519b6413855480c94409862bd024df70761e0900e6c5f8ca5435bcc7c-merged.mount: Deactivated successfully.
Nov 25 09:32:13 compute-1 podman[73218]: 2025-11-25 09:32:13.566367478 +0000 UTC m=+16.997358818 container remove 4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:32:13 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:32:13 compute-1 systemd[1]: libpod-conmon-4146bfff65a4b5ce4e84fa167773865f6e5de8781e58e15d32af490c97a59c80.scope: Deactivated successfully.
Nov 25 09:32:13 compute-1 podman[73290]: 2025-11-25 09:32:13.700105863 +0000 UTC m=+0.035281977 container create 3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_edison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:13 compute-1 systemd[1]: Started libpod-conmon-3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596.scope.
Nov 25 09:32:13 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bcdcad82a25b2cfd76b4782bf7182f9fed46373fd187345e6109e565974403/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bcdcad82a25b2cfd76b4782bf7182f9fed46373fd187345e6109e565974403/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:13 compute-1 podman[73290]: 2025-11-25 09:32:13.754583964 +0000 UTC m=+0.089760078 container init 3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:32:13 compute-1 podman[73290]: 2025-11-25 09:32:13.761614376 +0000 UTC m=+0.096790489 container start 3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:13 compute-1 podman[73290]: 2025-11-25 09:32:13.763055343 +0000 UTC m=+0.098231477 container attach 3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:32:13 compute-1 podman[73290]: 2025-11-25 09:32:13.68358365 +0000 UTC m=+0.018759784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:14 compute-1 keen_edison[73303]: [
Nov 25 09:32:14 compute-1 keen_edison[73303]:     {
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "available": false,
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "being_replaced": false,
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "ceph_device_lvm": false,
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "lsm_data": {},
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "lvs": [],
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "path": "/dev/sr0",
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "rejected_reasons": [
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "Has a FileSystem",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "Insufficient space (<5GB)"
Nov 25 09:32:14 compute-1 keen_edison[73303]:         ],
Nov 25 09:32:14 compute-1 keen_edison[73303]:         "sys_api": {
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "actuators": null,
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "device_nodes": [
Nov 25 09:32:14 compute-1 keen_edison[73303]:                 "sr0"
Nov 25 09:32:14 compute-1 keen_edison[73303]:             ],
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "devname": "sr0",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "human_readable_size": "474.00 KB",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "id_bus": "ata",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "model": "QEMU DVD-ROM",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "nr_requests": "64",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "parent": "/dev/sr0",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "partitions": {},
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "path": "/dev/sr0",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "removable": "1",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "rev": "2.5+",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "ro": "0",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "rotational": "1",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "sas_address": "",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "sas_device_handle": "",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "scheduler_mode": "mq-deadline",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "sectors": 0,
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "sectorsize": "2048",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "size": 485376.0,
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "support_discard": "2048",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "type": "disk",
Nov 25 09:32:14 compute-1 keen_edison[73303]:             "vendor": "QEMU"
Nov 25 09:32:14 compute-1 keen_edison[73303]:         }
Nov 25 09:32:14 compute-1 keen_edison[73303]:     }
Nov 25 09:32:14 compute-1 keen_edison[73303]: ]
Nov 25 09:32:14 compute-1 systemd[1]: libpod-3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596.scope: Deactivated successfully.
Nov 25 09:32:14 compute-1 podman[74260]: 2025-11-25 09:32:14.42644317 +0000 UTC m=+0.021537919 container died 3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:32:14 compute-1 podman[74260]: 2025-11-25 09:32:14.445834592 +0000 UTC m=+0.040929322 container remove 3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_edison, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:32:14 compute-1 systemd[1]: libpod-conmon-3cb6113e47d57f8692a0e88fb9f31e19897aa302fe67673903d0c0f27a9b8596.scope: Deactivated successfully.
Nov 25 09:32:14 compute-1 sudo[73163]: pam_unix(sudo:session): session closed for user root
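The inventory run above pulled the Ceph squid image and executed ceph-volume inside throwaway containers: focused_wright printed "167 167", which appears to probe the ceph UID/GID inside the image, and keen_edison emitted the JSON device report. The only device listed is /dev/sr0, rejected for carrying a filesystem and being under 5 GB; the loop-backed VG prepared earlier is presumably picked up through LVM rather than appearing here as a raw device. The logged invocation:

    fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
    cephadm=/var/lib/ceph/$fsid/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
    sudo python3 "$cephadm" \
        --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        --timeout 895 \
        ceph-volume --fsid "$fsid" -- inventory --format=json-pretty --filter-for-batch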
Nov 25 09:32:14 compute-1 sudo[74272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:32:14 compute-1 sudo[74272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74272]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:32:14 compute-1 sudo[74297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74297]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:32:14 compute-1 sudo[74322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74322]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:14 compute-1 sudo[74347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74347]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:32:14 compute-1 sudo[74372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74372]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:32:14 compute-1 sudo[74420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74420]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:32:14 compute-1 sudo[74445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74445]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:32:14 compute-1 sudo[74470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74470]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:32:14 compute-1 sudo[74495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74495]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:32:14 compute-1 sudo[74520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74520]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:14 compute-1 sudo[74545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:32:14 compute-1 sudo[74545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:14 compute-1 sudo[74545]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:15 compute-1 sudo[74570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74570]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:32:15 compute-1 sudo[74595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74595]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:32:15 compute-1 sudo[74643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74643]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:32:15 compute-1 sudo[74668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74668]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:32:15 compute-1 sudo[74693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74693]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:32:15 compute-1 sudo[74718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74718]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:32:15 compute-1 sudo[74743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74743]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:32:15 compute-1 sudo[74768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74768]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:15 compute-1 sudo[74793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74793]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:32:15 compute-1 sudo[74818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74818]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:32:15 compute-1 sudo[74866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74866]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:32:15 compute-1 sudo[74891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74891]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 09:32:15 compute-1 sudo[74916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74916]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:32:15 compute-1 sudo[74941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74941]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:32:15 compute-1 sudo[74966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74966]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[74991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:32:15 compute-1 sudo[74991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[74991]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[75016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:15 compute-1 sudo[75016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[75016]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[75041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:32:15 compute-1 sudo[75041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[75041]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[75089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:32:15 compute-1 sudo[75089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[75089]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[75114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:32:15 compute-1 sudo[75114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[75114]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[75139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:32:15 compute-1 sudo[75139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[75139]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[75164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:15 compute-1 sudo[75164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:15 compute-1 sudo[75164]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:15 compute-1 sudo[75189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:15 compute-1 sudo[75189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
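
The command above shows how cephadm is shipped and invoked here: a single Python file dropped under /var/lib/ceph/<fsid>/ and named after its own SHA-256, run under sudo with the container image pinned by digest and a 895 s timeout. _orch deploy is the internal subcommand the orchestrator uses to materialize one daemon on this host; everything from the podman bursts below down to the 'session closed' line at 09:32:17 runs inside this call. A sketch of the invocation, assuming the deployment spec travels on stdin (the transcript never shows that payload):

    import subprocess

    FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    # The spec below is hypothetical; the real orchestrator builds it from
    # the service specification for the daemon being deployed.
    spec = '{"fsid": "%s", "name": "crash.compute-1"}' % FSID
    subprocess.run(
        ["sudo", "/bin/python3", CEPHADM,
         "--image", IMAGE, "--timeout", "895",
         "_orch", "deploy", "--fsid", FSID],
        input=spec, text=True, check=True)
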
Nov 25 09:32:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:32:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:32:16 compute-1 podman[75248]: 2025-11-25 09:32:16.206137421 +0000 UTC m=+0.024800719 container create 99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lederberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:16 compute-1 systemd[1]: Started libpod-conmon-99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2.scope.
Nov 25 09:32:16 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:16 compute-1 podman[75248]: 2025-11-25 09:32:16.248107534 +0000 UTC m=+0.066770853 container init 99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:16 compute-1 podman[75248]: 2025-11-25 09:32:16.252735468 +0000 UTC m=+0.071398766 container start 99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lederberg, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:32:16 compute-1 podman[75248]: 2025-11-25 09:32:16.253977278 +0000 UTC m=+0.072640577 container attach 99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lederberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:16 compute-1 condescending_lederberg[75261]: 167 167
Nov 25 09:32:16 compute-1 systemd[1]: libpod-99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2.scope: Deactivated successfully.
Nov 25 09:32:16 compute-1 podman[75248]: 2025-11-25 09:32:16.256218554 +0000 UTC m=+0.074881851 container died 99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:32:16 compute-1 podman[75248]: 2025-11-25 09:32:16.272868278 +0000 UTC m=+0.091531576 container remove 99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lederberg, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:16 compute-1 podman[75248]: 2025-11-25 09:32:16.196022325 +0000 UTC m=+0.014685643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:16 compute-1 systemd[1]: libpod-conmon-99bc7c221f773e8a6b38b2d2ba58ae7592e61c3194fe020e145b94bff9b470d2.scope: Deactivated successfully.
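
The short-lived container 99bc7c22... above exists only to print '167 167': by all appearances the uid/gid probe cephadm runs against a new image so that later chown calls match the in-container ceph user. Note the 'image pull' event is logged after 'container remove' with an earlier monotonic offset (m=+0.014...); podman emits events asynchronously, so this is ordering noise, not a second pull. A sketch of an equivalent probe (the stat entrypoint override is an assumption about what the throwaway container ran):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def ceph_uid_gid(image: str) -> tuple[int, int]:
        # Print the numeric owner of /var/lib/ceph inside the image; the
        # '167 167' above is consistent with this.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat", image,
             "-c", "%u %g", "/var/lib/ceph"],
            capture_output=True, text=True, check=True).stdout
        uid, gid = out.split()
        return int(uid), int(gid)

    print(ceph_uid_gid(IMAGE))  # expected (167, 167)
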
Nov 25 09:32:16 compute-1 systemd[1]: Reloading.
Nov 25 09:32:16 compute-1 systemd-rc-local-generator[75296]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:16 compute-1 systemd-sysv-generator[75306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:32:16 compute-1 systemd[1]: Reloading.
Nov 25 09:32:16 compute-1 systemd-sysv-generator[75337]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:16 compute-1 systemd-rc-local-generator[75334]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:16 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Nov 25 09:32:16 compute-1 systemd[1]: Reloading.
Nov 25 09:32:16 compute-1 systemd-rc-local-generator[75370]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:16 compute-1 systemd-sysv-generator[75373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:16 compute-1 systemd[1]: Reached target Ceph cluster af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:32:16 compute-1 systemd[1]: Reloading.
Nov 25 09:32:16 compute-1 systemd-rc-local-generator[75408]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:16 compute-1 systemd-sysv-generator[75411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:17 compute-1 systemd[1]: Reloading.
Nov 25 09:32:17 compute-1 systemd-sysv-generator[75453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:17 compute-1 systemd-rc-local-generator[75450]: /etc/rc.d/rc.local is not marked executable, skipping.
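
Each 'Reloading.' above is a systemctl daemon-reload issued as cephadm writes unit files for the new cluster, and the rc.local / SysV 'network' generator warnings simply re-fire on every reload; they describe pre-existing host configuration, not anything Ceph broke. The 'Reached target' lines show the all-clusters target and the per-cluster target coming up between reloads. Roughly the per-daemon sequence, with the templated unit name inferred from cephadm's usual ceph-<fsid>@<daemon>.service scheme rather than quoted from this log:

    import subprocess

    FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"

    def enable_daemon(name: str) -> None:
        # Pick up the freshly written unit file, then enable and start it.
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", "--now",
                        f"ceph-{FSID}@{name}.service"], check=True)

    enable_daemon("crash.compute-1")
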
Nov 25 09:32:17 compute-1 systemd[1]: Created slice Slice /system/ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:32:17 compute-1 systemd[1]: Reached target System Time Set.
Nov 25 09:32:17 compute-1 systemd[1]: Reached target System Time Synchronized.
Nov 25 09:32:17 compute-1 systemd[1]: Starting Ceph crash.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:32:17 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:32:17 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:32:17 compute-1 podman[75505]: 2025-11-25 09:32:17.487606194 +0000 UTC m=+0.049856250 container create 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:17 compute-1 podman[75505]: 2025-11-25 09:32:17.45429401 +0000 UTC m=+0.016544076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f852f8bf755e80f5d8f621211c1ec2e2979ac0031cf4a2fdd829719184de3e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f852f8bf755e80f5d8f621211c1ec2e2979ac0031cf4a2fdd829719184de3e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f852f8bf755e80f5d8f621211c1ec2e2979ac0031cf4a2fdd829719184de3e8/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:17 compute-1 podman[75505]: 2025-11-25 09:32:17.694161508 +0000 UTC m=+0.256411594 container init 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:32:17 compute-1 podman[75505]: 2025-11-25 09:32:17.699092443 +0000 UTC m=+0.261342489 container start 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 25 09:32:17 compute-1 bash[75505]: 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4
Nov 25 09:32:17 compute-1 systemd[1]: Started Ceph crash.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: 2025-11-25T09:32:17.816+0000 7f698485d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: 2025-11-25T09:32:17.816+0000 7f698485d640 -1 AuthRegistry(0x7f697c069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: 2025-11-25T09:32:17.816+0000 7f698485d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: 2025-11-25T09:32:17.816+0000 7f698485d640 -1 AuthRegistry(0x7f698485bff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: 2025-11-25T09:32:17.818+0000 7f69825d2640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: 2025-11-25T09:32:17.818+0000 7f698485d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 25 09:32:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1[75516]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
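
The crash agent's startup noise above is benign: its 'pinging cluster to exercise our key' step searched only the default client.admin keyring paths, found nothing in its container, fell back to disabling cephx, and was refused, while the keyring it will actually post crashes with (ceph.client.crash.compute-1.keyring, bind-mounted per the xfs remount line at 09:32:17) never entered that search path. The agent then settles into scanning /var/lib/ceph/crash every 600 s. To confirm the crash key itself authenticates, something like the following (the trailing-command form of cephadm enter is an assumption; an 'access denied' on the command would still prove authentication succeeded, unlike the 'error connecting to the cluster' above):

    import subprocess

    FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"

    # Re-run the ping by hand inside the crash container, pointing the CLI
    # at the keyring that is actually mounted there instead of the default
    # client.admin search path listed in the errors above.
    subprocess.run(
        ["cephadm", "enter", "--fsid", FSID, "--name", "crash.compute-1",
         "--", "ceph", "-n", "client.crash.compute-1",
         "-k", "/etc/ceph/ceph.client.crash.compute-1.keyring", "-s"],
        check=True)
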
Nov 25 09:32:17 compute-1 sudo[75189]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:17 compute-1 sudo[75533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:17 compute-1 sudo[75533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:17 compute-1 sudo[75533]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:17 compute-1 sudo[75558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:32:17 compute-1 sudo[75558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
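
The ceph-volume invocation above re-executes cephadm to run 'lvm batch' in a container: --config-json - feeds config and bootstrap keyring on stdin, CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group tags the new OSD with its service spec (it resurfaces in the lv_tags further down), --no-auto takes the listed LV as-is, and --no-systemd leaves unit management to cephadm. The 'PV /dev/loop3 online' line below shows the target LV sits on a loop device; a hypothetical prep for such a device (image path, loop number, and the 20G size are assumptions that only approximate the reported lv_size):

    import subprocess

    # Hypothetical prep for an LV-backed OSD target like ceph_vg0/ceph_lv0.
    for cmd in (
        ["truncate", "-s", "20G", "/var/lib/ceph-osd-0.img"],
        ["losetup", "/dev/loop3", "/var/lib/ceph-osd-0.img"],
        ["pvcreate", "/dev/loop3"],
        ["vgcreate", "ceph_vg0", "/dev/loop3"],
        ["lvcreate", "-l", "100%FREE", "-n", "ceph_lv0", "ceph_vg0"],
    ):
        subprocess.run(cmd, check=True)
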
Nov 25 09:32:18 compute-1 podman[75615]: 2025-11-25 09:32:18.214290264 +0000 UTC m=+0.028631601 container create 60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 09:32:18 compute-1 systemd[1]: Started libpod-conmon-60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569.scope.
Nov 25 09:32:18 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:18 compute-1 podman[75615]: 2025-11-25 09:32:18.27633619 +0000 UTC m=+0.090677527 container init 60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:32:18 compute-1 podman[75615]: 2025-11-25 09:32:18.28074522 +0000 UTC m=+0.095086557 container start 60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:32:18 compute-1 podman[75615]: 2025-11-25 09:32:18.281927109 +0000 UTC m=+0.096268445 container attach 60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:32:18 compute-1 gallant_yalow[75630]: 167 167
Nov 25 09:32:18 compute-1 systemd[1]: libpod-60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569.scope: Deactivated successfully.
Nov 25 09:32:18 compute-1 podman[75615]: 2025-11-25 09:32:18.285562611 +0000 UTC m=+0.099903949 container died 60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:32:18 compute-1 podman[75615]: 2025-11-25 09:32:18.202924711 +0000 UTC m=+0.017266047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-6fdf9656d1606ac34b1b3d00cd5861c3188692f46def0f97121bfa9d933bc9b8-merged.mount: Deactivated successfully.
Nov 25 09:32:18 compute-1 podman[75615]: 2025-11-25 09:32:18.307856295 +0000 UTC m=+0.122197632 container remove 60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 25 09:32:18 compute-1 systemd[1]: libpod-conmon-60bdd9539c1096dbccdb1ed63700696f4a6c1f9499a882de18e7979721716569.scope: Deactivated successfully.
Nov 25 09:32:18 compute-1 podman[75652]: 2025-11-25 09:32:18.422638578 +0000 UTC m=+0.029939416 container create 89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:18 compute-1 systemd[1]: Started libpod-conmon-89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428.scope.
Nov 25 09:32:18 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b259180e73547061247c8366c0e3695949ba124c72ea1c567477d3688646a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b259180e73547061247c8366c0e3695949ba124c72ea1c567477d3688646a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b259180e73547061247c8366c0e3695949ba124c72ea1c567477d3688646a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b259180e73547061247c8366c0e3695949ba124c72ea1c567477d3688646a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b259180e73547061247c8366c0e3695949ba124c72ea1c567477d3688646a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-1 podman[75652]: 2025-11-25 09:32:18.479538293 +0000 UTC m=+0.086839121 container init 89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_kapitsa, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 09:32:18 compute-1 podman[75652]: 2025-11-25 09:32:18.484843253 +0000 UTC m=+0.092144081 container start 89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_kapitsa, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:32:18 compute-1 podman[75652]: 2025-11-25 09:32:18.486466372 +0000 UTC m=+0.093767231 container attach 89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:32:18 compute-1 podman[75652]: 2025-11-25 09:32:18.410704894 +0000 UTC m=+0.018005752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:18 compute-1 confident_kapitsa[75665]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:32:18 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:18 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:18 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 54eaf85f-4a96-4481-89e2-59a1f01c0d63
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:19 compute-1 lvm[75726]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:32:19 compute-1 lvm[75726]: VG ceph_vg0 finished
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]:  stderr: got monmap epoch 1
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: --> Creating keyring file for osd.0
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 25 09:32:19 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 54eaf85f-4a96-4481-89e2-59a1f01c0d63 --setuser ceph --setgroup ceph
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]:  stderr: 2025-11-25T09:32:19.589+0000 7f8bec892740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]:  stderr: 2025-11-25T09:32:19.853+0000 7f8bec892740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 25 09:32:22 compute-1 confident_kapitsa[75665]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
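
The run above is a complete 'lvm create', i.e. prepare plus activate: generate keys, register the OSD with 'osd new' (which handed back id 0), tmpfs-mount the OSD directory, link block to the LV, fetch the monmap, mkfs BlueStore, then prime-osd-dir and relink for activation. The two stderr lines ('No valid bdev label found', '_read_fsid unparsable uuid') are expected on a never-initialized device: mkfs probes for an existing label before writing one. A quick read-back of the label it wrote (run as root on compute-1; cephadm shell mapping host /dev is assumed):

    import json
    import subprocess

    FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"

    # osd_uuid should match the 54eaf85f-... fsid passed to 'osd new' above.
    out = subprocess.run(
        ["cephadm", "shell", "--fsid", FSID, "--",
         "ceph-bluestore-tool", "show-label",
         "--dev", "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out)["/dev/ceph_vg0/ceph_lv0"]["osd_uuid"])
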
Nov 25 09:32:22 compute-1 systemd[1]: libpod-89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428.scope: Deactivated successfully.
Nov 25 09:32:22 compute-1 systemd[1]: libpod-89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428.scope: Consumed 1.422s CPU time.
Nov 25 09:32:22 compute-1 podman[75652]: 2025-11-25 09:32:22.382649495 +0000 UTC m=+3.989950333 container died 89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:32:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-825b259180e73547061247c8366c0e3695949ba124c72ea1c567477d3688646a-merged.mount: Deactivated successfully.
Nov 25 09:32:22 compute-1 podman[75652]: 2025-11-25 09:32:22.403004056 +0000 UTC m=+4.010304894 container remove 89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_kapitsa, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:32:22 compute-1 systemd[1]: libpod-conmon-89dfd3da92723e7a5eba1213de318ee88feec7e84c794e4f5cb7ca1ed286a428.scope: Deactivated successfully.
Nov 25 09:32:22 compute-1 sudo[75558]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:22 compute-1 sudo[76658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:22 compute-1 sudo[76658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:22 compute-1 sudo[76658]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:22 compute-1 sudo[76683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:32:22 compute-1 sudo[76683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:22 compute-1 podman[76736]: 2025-11-25 09:32:22.779567838 +0000 UTC m=+0.026258284 container create ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kapitsa, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:32:22 compute-1 systemd[1]: Started libpod-conmon-ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738.scope.
Nov 25 09:32:22 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:22 compute-1 podman[76736]: 2025-11-25 09:32:22.836115725 +0000 UTC m=+0.082806181 container init ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:32:22 compute-1 podman[76736]: 2025-11-25 09:32:22.841045266 +0000 UTC m=+0.087735702 container start ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kapitsa, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:32:22 compute-1 podman[76736]: 2025-11-25 09:32:22.842388442 +0000 UTC m=+0.089078899 container attach ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:22 compute-1 strange_kapitsa[76750]: 167 167
Nov 25 09:32:22 compute-1 systemd[1]: libpod-ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738.scope: Deactivated successfully.
Nov 25 09:32:22 compute-1 podman[76736]: 2025-11-25 09:32:22.84454088 +0000 UTC m=+0.091231316 container died ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 09:32:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-9e97dcdc02b877c61403c4c6f90b71efee27a5578bc021dbaecdf386fc56c5e8-merged.mount: Deactivated successfully.
Nov 25 09:32:22 compute-1 podman[76736]: 2025-11-25 09:32:22.861504754 +0000 UTC m=+0.108195190 container remove ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:32:22 compute-1 podman[76736]: 2025-11-25 09:32:22.767893472 +0000 UTC m=+0.014583928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:22 compute-1 systemd[1]: libpod-conmon-ae69178fc940ac8c3d897c4038107a1c2568ee98ec2184c8cc692f985999c738.scope: Deactivated successfully.
Nov 25 09:32:22 compute-1 podman[76773]: 2025-11-25 09:32:22.970748495 +0000 UTC m=+0.027051166 container create b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_lovelace, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:32:22 compute-1 systemd[1]: Started libpod-conmon-b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727.scope.
Nov 25 09:32:23 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c81e2653d00461107b8142b19921a619f3eb5f32f65f4ee0e8b48e319f9897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c81e2653d00461107b8142b19921a619f3eb5f32f65f4ee0e8b48e319f9897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c81e2653d00461107b8142b19921a619f3eb5f32f65f4ee0e8b48e319f9897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c81e2653d00461107b8142b19921a619f3eb5f32f65f4ee0e8b48e319f9897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-1 podman[76773]: 2025-11-25 09:32:23.023872915 +0000 UTC m=+0.080175586 container init b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:23 compute-1 podman[76773]: 2025-11-25 09:32:23.029802286 +0000 UTC m=+0.086104957 container start b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 25 09:32:23 compute-1 podman[76773]: 2025-11-25 09:32:23.031762222 +0000 UTC m=+0.088064914 container attach b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:23 compute-1 podman[76773]: 2025-11-25 09:32:22.959981265 +0000 UTC m=+0.016283957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]: {
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:     "0": [
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:         {
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "devices": [
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "/dev/loop3"
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             ],
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "lv_name": "ceph_lv0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "lv_size": "21470642176",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fAal7q-0JUz-Eynh-QRlU-Oi29-m5yw-BRIlAz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=54eaf85f-4a96-4481-89e2-59a1f01c0d63,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "lv_uuid": "fAal7q-0JUz-Eynh-QRlU-Oi29-m5yw-BRIlAz",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "name": "ceph_lv0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "tags": {
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.block_uuid": "fAal7q-0JUz-Eynh-QRlU-Oi29-m5yw-BRIlAz",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.cluster_name": "ceph",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.crush_device_class": "",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.encrypted": "0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.osd_fsid": "54eaf85f-4a96-4481-89e2-59a1f01c0d63",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.osd_id": "0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.type": "block",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.vdo": "0",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:                 "ceph.with_tpm": "0"
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             },
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "type": "block",
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:             "vg_name": "ceph_vg0"
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:         }
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]:     ]
Nov 25 09:32:23 compute-1 vigilant_lovelace[76786]: }
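
The JSON emitted by the vigilant_lovelace container above is a per-OSD LVM inventory. The producing command is not visible in this excerpt, but the shape matches `ceph-volume lvm list --format json`. A minimal sketch for pulling out the fields the later activation steps rely on, assuming the JSON has been captured to a hypothetical file "lvm_list.json":

    import json

    # Load the inventory captured above ("lvm_list.json" is a hypothetical
    # capture file; the schema matches `ceph-volume lvm list --format json`).
    with open("lvm_list.json") as f:
        report = json.load(f)

    # Top-level keys are OSD IDs; each maps to a list of logical volumes.
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            # Fields taken verbatim from the log above.
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"cluster_fsid={tags['ceph.cluster_fsid']}")

Run against the report above, this prints the single LVM-backed OSD (osd.0 on /dev/ceph_vg0/ceph_lv0) that the activate steps later in the log operate on.
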
Nov 25 09:32:23 compute-1 systemd[1]: libpod-b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727.scope: Deactivated successfully.
Nov 25 09:32:23 compute-1 podman[76795]: 2025-11-25 09:32:23.282298678 +0000 UTC m=+0.017249151 container died b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_lovelace, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:32:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-42c81e2653d00461107b8142b19921a619f3eb5f32f65f4ee0e8b48e319f9897-merged.mount: Deactivated successfully.
Nov 25 09:32:23 compute-1 podman[76795]: 2025-11-25 09:32:23.300822595 +0000 UTC m=+0.035773048 container remove b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:23 compute-1 systemd[1]: libpod-conmon-b260f8707b9ac83befbea9cca94fbfb4f8313b474fe294910cbcdbbfec01b727.scope: Deactivated successfully.
Nov 25 09:32:23 compute-1 sudo[76683]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:23 compute-1 sudo[76807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:23 compute-1 sudo[76807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:23 compute-1 sudo[76807]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:23 compute-1 sudo[76832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:23 compute-1 sudo[76832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:23 compute-1 podman[76891]: 2025-11-25 09:32:23.693878803 +0000 UTC m=+0.025094685 container create 829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:32:23 compute-1 systemd[1]: Started libpod-conmon-829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e.scope.
Nov 25 09:32:23 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:23 compute-1 podman[76891]: 2025-11-25 09:32:23.745289039 +0000 UTC m=+0.076504941 container init 829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:32:23 compute-1 podman[76891]: 2025-11-25 09:32:23.749073606 +0000 UTC m=+0.080289489 container start 829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:23 compute-1 podman[76891]: 2025-11-25 09:32:23.750483198 +0000 UTC m=+0.081699080 container attach 829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:32:23 compute-1 nostalgic_thompson[76905]: 167 167
Nov 25 09:32:23 compute-1 systemd[1]: libpod-829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e.scope: Deactivated successfully.
Nov 25 09:32:23 compute-1 conmon[76905]: conmon 829722b585e98e50e59b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e.scope/container/memory.events
Nov 25 09:32:23 compute-1 podman[76891]: 2025-11-25 09:32:23.75251592 +0000 UTC m=+0.083731823 container died 829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:32:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-1be1597f25b929090e618fa0e24d00caea668b99adf085337d548ac30e0f3ab2-merged.mount: Deactivated successfully.
Nov 25 09:32:23 compute-1 podman[76891]: 2025-11-25 09:32:23.769134144 +0000 UTC m=+0.100350026 container remove 829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:32:23 compute-1 podman[76891]: 2025-11-25 09:32:23.683468084 +0000 UTC m=+0.014683986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:23 compute-1 systemd[1]: libpod-conmon-829722b585e98e50e59b6816ffe06d638cba31a55a132b2168d7f25aabf94d8e.scope: Deactivated successfully.
Nov 25 09:32:23 compute-1 podman[76933]: 2025-11-25 09:32:23.944097572 +0000 UTC m=+0.027007513 container create 0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:32:23 compute-1 systemd[1]: Started libpod-conmon-0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832.scope.
Nov 25 09:32:23 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab3d36c7e39739e24611e39634d226db40ca25e52f6c008479082b39fa2e786/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab3d36c7e39739e24611e39634d226db40ca25e52f6c008479082b39fa2e786/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab3d36c7e39739e24611e39634d226db40ca25e52f6c008479082b39fa2e786/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab3d36c7e39739e24611e39634d226db40ca25e52f6c008479082b39fa2e786/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab3d36c7e39739e24611e39634d226db40ca25e52f6c008479082b39fa2e786/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-1 podman[76933]: 2025-11-25 09:32:24.005874313 +0000 UTC m=+0.088784254 container init 0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Nov 25 09:32:24 compute-1 podman[76933]: 2025-11-25 09:32:24.013117786 +0000 UTC m=+0.096027717 container start 0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:24 compute-1 podman[76933]: 2025-11-25 09:32:24.014487943 +0000 UTC m=+0.097397874 container attach 0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:24 compute-1 podman[76933]: 2025-11-25 09:32:23.93297831 +0000 UTC m=+0.015888262 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test[76946]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 25 09:32:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test[76946]:                             [--no-systemd] [--no-tmpfs]
Nov 25 09:32:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test[76946]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 09:32:24 compute-1 systemd[1]: libpod-0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832.scope: Deactivated successfully.
Nov 25 09:32:24 compute-1 podman[76933]: 2025-11-25 09:32:24.160494574 +0000 UTC m=+0.243404505 container died 0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-3ab3d36c7e39739e24611e39634d226db40ca25e52f6c008479082b39fa2e786-merged.mount: Deactivated successfully.
Nov 25 09:32:24 compute-1 podman[76933]: 2025-11-25 09:32:24.18205961 +0000 UTC m=+0.264969541 container remove 0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:32:24 compute-1 systemd[1]: libpod-conmon-0d7ed8c50736b4280a009fef369afe55c142e84b40ad90b4ad8f3614cf39a832.scope: Deactivated successfully.
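
The short-lived ...-osd-0-activate-test container above exits immediately with a usage error for `--bad-option`. A plausible reading (an inference from the log, not documented cephadm behavior) is a capability probe: argparse printing "unrecognized arguments" proves the `activate` subcommand exists, whereas "invalid choice" would mean it does not. A sketch of that probe pattern, not cephadm's actual implementation:

    import subprocess

    # Pass a deliberately invalid flag and classify argparse's stderr.
    # "unrecognized arguments" (as logged above) => subcommand exists;
    # "invalid choice" would mean the subcommand is missing.
    proc = subprocess.run(
        ["ceph-volume", "activate", "--bad-option"],
        capture_output=True,
        text=True,
    )
    supports_activate = "unrecognized arguments" in proc.stderr
    print("ceph-volume activate supported:", supports_activate)
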
Nov 25 09:32:24 compute-1 systemd[1]: Reloading.
Nov 25 09:32:24 compute-1 systemd-rc-local-generator[77000]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:24 compute-1 systemd-sysv-generator[77004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:24 compute-1 systemd[1]: Reloading.
Nov 25 09:32:24 compute-1 systemd-sysv-generator[77043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:24 compute-1 systemd-rc-local-generator[77040]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:24 compute-1 systemd[1]: Starting Ceph osd.0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:32:24 compute-1 podman[77094]: 2025-11-25 09:32:24.888493959 +0000 UTC m=+0.026761220 container create 47ed14da5021a51f7f730a62c6123f0fd951eaf7b4cd6d9cdf2fc28d9fce5821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:32:24 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e21a09cc1d2df4ec4fd4915dc3eb809dcd56f519cd0f77b4324f6b6b53d263/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e21a09cc1d2df4ec4fd4915dc3eb809dcd56f519cd0f77b4324f6b6b53d263/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e21a09cc1d2df4ec4fd4915dc3eb809dcd56f519cd0f77b4324f6b6b53d263/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e21a09cc1d2df4ec4fd4915dc3eb809dcd56f519cd0f77b4324f6b6b53d263/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e21a09cc1d2df4ec4fd4915dc3eb809dcd56f519cd0f77b4324f6b6b53d263/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-1 podman[77094]: 2025-11-25 09:32:24.940074535 +0000 UTC m=+0.078341796 container init 47ed14da5021a51f7f730a62c6123f0fd951eaf7b4cd6d9cdf2fc28d9fce5821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:32:24 compute-1 podman[77094]: 2025-11-25 09:32:24.946384322 +0000 UTC m=+0.084651572 container start 47ed14da5021a51f7f730a62c6123f0fd951eaf7b4cd6d9cdf2fc28d9fce5821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:32:24 compute-1 podman[77094]: 2025-11-25 09:32:24.947580841 +0000 UTC m=+0.085848092 container attach 47ed14da5021a51f7f730a62c6123f0fd951eaf7b4cd6d9cdf2fc28d9fce5821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:32:24 compute-1 podman[77094]: 2025-11-25 09:32:24.877836795 +0000 UTC m=+0.016104067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-1 lvm[77188]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:32:25 compute-1 lvm[77188]: VG ceph_vg0 finished
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-1 bash[77094]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-1 lvm[77192]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:32:25 compute-1 lvm[77192]: VG ceph_vg0 finished
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 09:32:25 compute-1 bash[77094]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 09:32:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate[77106]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 25 09:32:25 compute-1 bash[77094]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 25 09:32:25 compute-1 systemd[1]: libpod-47ed14da5021a51f7f730a62c6123f0fd951eaf7b4cd6d9cdf2fc28d9fce5821.scope: Deactivated successfully.
Nov 25 09:32:25 compute-1 podman[77094]: 2025-11-25 09:32:25.865844291 +0000 UTC m=+1.004111542 container died 47ed14da5021a51f7f730a62c6123f0fd951eaf7b4cd6d9cdf2fc28d9fce5821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:32:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-e4e21a09cc1d2df4ec4fd4915dc3eb809dcd56f519cd0f77b4324f6b6b53d263-merged.mount: Deactivated successfully.
Nov 25 09:32:25 compute-1 podman[77094]: 2025-11-25 09:32:25.890187864 +0000 UTC m=+1.028455114 container remove 47ed14da5021a51f7f730a62c6123f0fd951eaf7b4cd6d9cdf2fc28d9fce5821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
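
The interleaved `Running command:` lines above (each emitted twice, once under the container name and once under the unit's bash wrapper, which is normal journald duplication) are the LVM activation sequence that ends in "ceph-volume lvm activate successful for osd ID: 0": prime the OSD directory from the BlueStore label on the LV, re-point the block symlink, and hand ownership to ceph:ceph. A replay sketch of just those commands, copied from the log rather than from ceph-volume's source; it must run as root with the Ceph tools installed (in the log it runs inside the quay.io/ceph/ceph container):

    import subprocess

    OSD_DIR = "/var/lib/ceph/osd/ceph-0"
    LV = "/dev/ceph_vg0/ceph_lv0"

    # Each entry mirrors a "Running command:" line logged above.
    for cmd in (
        ["ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
         "--dev", LV, "--path", OSD_DIR, "--no-mon-config"],
        ["ln", "-snf", LV, OSD_DIR + "/block"],
        ["chown", "-h", "ceph:ceph", OSD_DIR + "/block"],
        ["chown", "-R", "ceph:ceph", OSD_DIR],
    ):
        subprocess.run(cmd, check=True)
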
Nov 25 09:32:26 compute-1 podman[77338]: 2025-11-25 09:32:26.023822135 +0000 UTC m=+0.025343514 container create 84467a07d50d668349bcb31ef8a91b2755a089e026404cdefe6cf8b796bb25b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c13e9b757b4238d1e45cd685c8eb8451301f5f93512fa2d42b9bb1acd949e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c13e9b757b4238d1e45cd685c8eb8451301f5f93512fa2d42b9bb1acd949e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c13e9b757b4238d1e45cd685c8eb8451301f5f93512fa2d42b9bb1acd949e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c13e9b757b4238d1e45cd685c8eb8451301f5f93512fa2d42b9bb1acd949e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c13e9b757b4238d1e45cd685c8eb8451301f5f93512fa2d42b9bb1acd949e1/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 podman[77338]: 2025-11-25 09:32:26.065572474 +0000 UTC m=+0.067093853 container init 84467a07d50d668349bcb31ef8a91b2755a089e026404cdefe6cf8b796bb25b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:26 compute-1 podman[77338]: 2025-11-25 09:32:26.070858584 +0000 UTC m=+0.072379964 container start 84467a07d50d668349bcb31ef8a91b2755a089e026404cdefe6cf8b796bb25b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:32:26 compute-1 bash[77338]: 84467a07d50d668349bcb31ef8a91b2755a089e026404cdefe6cf8b796bb25b0
Nov 25 09:32:26 compute-1 podman[77338]: 2025-11-25 09:32:26.013368214 +0000 UTC m=+0.014889603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:26 compute-1 systemd[1]: Started Ceph osd.0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:32:26 compute-1 ceph-osd[77354]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:32:26 compute-1 ceph-osd[77354]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 25 09:32:26 compute-1 ceph-osd[77354]: pidfile_write: ignore empty --pid-file
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) close
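
A quick arithmetic check on the bdev open line above: the decimal and hex sizes agree, "20 GiB" is a rounding of 19.996 GiB, and the size is a whole number of 4 KiB blocks even though the backing LV reports st_blksize 512 (which is why BlueStore logs "using bdev_block_size 4096 anyway"):

    # Values copied from the bdev open line above.
    size = 21470642176
    assert size == 0x4ffc00000
    print(size / 2**30)   # 19.99609375 -> logged rounded as "20 GiB"
    print(size % 4096)    # 0 -> whole number of 4 KiB blocks
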
Nov 25 09:32:26 compute-1 sudo[76832]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:26 compute-1 sudo[77366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:26 compute-1 sudo[77366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:26 compute-1 sudo[77366]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:26 compute-1 sudo[77391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:32:26 compute-1 sudo[77391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:26 compute-1 podman[77451]: 2025-11-25 09:32:26.472091311 +0000 UTC m=+0.026924116 container create a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:26 compute-1 systemd[1]: Started libpod-conmon-a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd.scope.
Nov 25 09:32:26 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:26 compute-1 podman[77451]: 2025-11-25 09:32:26.52584805 +0000 UTC m=+0.080680855 container init a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:32:26 compute-1 podman[77451]: 2025-11-25 09:32:26.530056263 +0000 UTC m=+0.084889059 container start a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jackson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:26 compute-1 podman[77451]: 2025-11-25 09:32:26.531129872 +0000 UTC m=+0.085962679 container attach a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jackson, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:32:26 compute-1 funny_jackson[77464]: 167 167
Nov 25 09:32:26 compute-1 systemd[1]: libpod-a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd.scope: Deactivated successfully.
Nov 25 09:32:26 compute-1 podman[77451]: 2025-11-25 09:32:26.534027742 +0000 UTC m=+0.088860548 container died a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jackson, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-e4c1db445f7d48c40c7c820832acaef64752541f6a87c08325d0c1d16aaa06fb-merged.mount: Deactivated successfully.
Nov 25 09:32:26 compute-1 podman[77451]: 2025-11-25 09:32:26.549202972 +0000 UTC m=+0.104035768 container remove a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_jackson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:32:26 compute-1 podman[77451]: 2025-11-25 09:32:26.461249149 +0000 UTC m=+0.016081965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:26 compute-1 systemd[1]: libpod-conmon-a39bb95fbaec90931f90a81785816744a4971800f453baeff7252319bf3a11fd.scope: Deactivated successfully.
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:26 compute-1 podman[77485]: 2025-11-25 09:32:26.66346968 +0000 UTC m=+0.030293173 container create 5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:32:26 compute-1 systemd[1]: Started libpod-conmon-5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22.scope.
Nov 25 09:32:26 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6549f31846f9d75953cd5e53209f45d85fb35edd9b03535550c34de363ebdad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6549f31846f9d75953cd5e53209f45d85fb35edd9b03535550c34de363ebdad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6549f31846f9d75953cd5e53209f45d85fb35edd9b03535550c34de363ebdad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6549f31846f9d75953cd5e53209f45d85fb35edd9b03535550c34de363ebdad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-1 podman[77485]: 2025-11-25 09:32:26.727185357 +0000 UTC m=+0.094008850 container init 5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_volhard, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:32:26 compute-1 podman[77485]: 2025-11-25 09:32:26.731978442 +0000 UTC m=+0.098801935 container start 5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:26 compute-1 podman[77485]: 2025-11-25 09:32:26.73314733 +0000 UTC m=+0.099970824 container attach 5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:26 compute-1 podman[77485]: 2025-11-25 09:32:26.64845934 +0000 UTC m=+0.015282843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:27 compute-1 lvm[77576]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:32:27 compute-1 lvm[77576]: VG ceph_vg0 finished
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046bc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:27 compute-1 funny_volhard[77500]: {}
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d046b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:27 compute-1 systemd[1]: libpod-5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22.scope: Deactivated successfully.
Nov 25 09:32:27 compute-1 podman[77485]: 2025-11-25 09:32:27.223848025 +0000 UTC m=+0.590671518 container died 5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_volhard, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:32:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-b6549f31846f9d75953cd5e53209f45d85fb35edd9b03535550c34de363ebdad-merged.mount: Deactivated successfully.
Nov 25 09:32:27 compute-1 podman[77485]: 2025-11-25 09:32:27.24340232 +0000 UTC m=+0.610225813 container remove 5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_volhard, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:32:27 compute-1 systemd[1]: libpod-conmon-5cbddf79b2f166ffbf41e1a0338d8918d261de0869d22dcb86bebf99d3c26e22.scope: Deactivated successfully.
Nov 25 09:32:27 compute-1 sudo[77391]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:27 compute-1 ceph-osd[77354]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 25 09:32:27 compute-1 ceph-osd[77354]: load: jerasure load: lrc 
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:27 compute-1 sudo[77597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:32:27 compute-1 sudo[77597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:27 compute-1 sudo[77597]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:27 compute-1 sudo[77622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:27 compute-1 sudo[77622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:27 compute-1 sudo[77622]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:27 compute-1 sudo[77647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:32:27 compute-1 sudo[77647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 09:32:27 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:28 compute-1 ceph-osd[77354]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 09:32:28 compute-1 ceph-osd[77354]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:28 compute-1 podman[77730]: 2025-11-25 09:32:28.046436462 +0000 UTC m=+0.039334737 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:32:28 compute-1 podman[77730]: 2025-11-25 09:32:28.125617736 +0000 UTC m=+0.118515991 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 09:32:28 compute-1 sudo[77647]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:28 compute-1 sudo[77784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:28 compute-1 sudo[77784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:28 compute-1 sudo[77784]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:28 compute-1 sudo[77809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- inventory --format=json-pretty --filter-for-batch
Nov 25 09:32:28 compute-1 sudo[77809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount shared_bdev_used = 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: RocksDB version: 7.9.2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Git sha 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: DB SUMMARY
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: DB Session ID:  MCVFDXPNT40ILIQKGN65
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: CURRENT file:  CURRENT
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                         Options.error_if_exists: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.create_if_missing: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                                     Options.env: 0x5584d12d7dc0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                                Options.info_log: 0x5584d12db7a0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                              Options.statistics: (nil)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.use_fsync: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                              Options.db_log_dir: 
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.write_buffer_manager: 0x5584d1406a00
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.unordered_write: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.row_cache: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                              Options.wal_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.two_write_queues: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.wal_compression: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.atomic_flush: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.max_background_jobs: 4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.max_background_compactions: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.max_subcompactions: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.max_open_files: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Compression algorithms supported:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kZSTD supported: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kXpressCompression supported: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kBZip2Compression supported: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kLZ4Compression supported: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kZlibCompression supported: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kSnappyCompression supported: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
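The per-column-family dump above is worth decoding: each family gets a 16 MiB memtable (write_buffer_size), may keep up to 64 memtables resident, and merges six per flush, so roughly 96 MiB of writes accumulate before an L0 file is produced. LZ4 applies at every level; "bottommost_compression: Disabled" means no separate bottommost override is set, not that the last level is uncompressed, and the compression_opts.level of 32767 is RocksDB's "use the codec's default level" sentinel rather than a literal level. A minimal C++ sketch of the equivalent plain-RocksDB settings (BlueStore actually derives these from its bluestore_rocksdb_options config string, so this is illustrative only):

    #include <rocksdb/options.h>

    // Illustrative only: the memtable and compression settings printed in the
    // log, expressed through the plain RocksDB C++ API. BlueStore builds its
    // real options from the bluestore_rocksdb_options config string.
    rocksdb::ColumnFamilyOptions MakeCfOptionsLikeLog() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;           // 16777216: one 16 MiB memtable
      cf.max_write_buffer_number = 64;           // up to 64 memtables resident
      cf.min_write_buffer_number_to_merge = 6;   // merge 6 (~96 MiB) per flush
      cf.compression = rocksdb::kLZ4Compression;
      // "Disabled" in the dump means no separate bottommost override:
      cf.bottommost_compression = rocksdb::kDisableCompressionOption;
      return cf;
    }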
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
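With level_compaction_dynamic_level_bytes off, the level sizing in these dumps is static: L0 compaction starts at 8 files, L1 is capped at max_bytes_for_level_base (1 GiB), and each deeper level is 8x larger (the addtl multipliers are all 1), giving roughly 8 GiB at L2, 64 GiB at L3, and so on through num_levels 7. A short, self-contained sketch of that arithmetic:

    #include <cstdint>
    #include <cstdio>

    // Static level targets implied by the dump: L1 = 1 GiB base, 8x per level,
    // additional per-level multipliers all 1, seven levels total (L0..L6).
    int main() {
      uint64_t target = 1ULL << 30;       // max_bytes_for_level_base: 1073741824
      const double multiplier = 8.0;      // max_bytes_for_level_multiplier
      for (int level = 1; level <= 6; ++level) {
        std::printf("L%d target: %llu bytes\n", level,
                    static_cast<unsigned long long>(target));
        target = static_cast<uint64_t>(target * multiplier);
      }
      return 0;
    }

Since target_file_size_base is 64 MiB with a multiplier of 1, individual SSTs stay the same size at every level; only the per-level file count grows.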
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
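Note that the table_factory dumps for the m-* and p-* families all print the same block_cache pointer (0x5584d0501350): those families share a single ~461 MiB BinnedLRUCache split into 16 shards (num_shard_bits 4), with index and filter blocks cached alongside data blocks. BinnedLRUCache is a Ceph-side rocksdb::Cache implementation, so a plain-RocksDB sketch has to substitute the stock LRU cache; the bloom filter bits-per-key is not printed in the dump, so 10 below is an assumption:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/table.h>

    // Sketch of the table_factory settings above, with upstream NewLRUCache
    // standing in for Ceph's BinnedLRUCache. bits_per_key=10 is assumed; the
    // dump only says "filter_policy: bloomfilter".
    rocksdb::BlockBasedTableOptions MakeTableOptionsLikeLog() {
      rocksdb::BlockBasedTableOptions t;
      t.block_cache = rocksdb::NewLRUCache(
          483183820 /* capacity */, 4 /* num_shard_bits: 16 shards */,
          false /* strict_capacity_limit */, 0.0 /* high_pri_pool_ratio */);
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.block_size = 4096;
      t.format_version = 5;
      t.whole_key_filtering = true;
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      return t;
    }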
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
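Every family also registers a CompactOnDeletionCollector (sliding window 32768, deletion trigger 16384, ratio 0), which marks an SST for compaction once half of any 32768-entry window consists of delete tombstones; that keeps delete-heavy key ranges, such as bulk omap cleanup, from accumulating tombstones that slow iteration. A minimal sketch of wiring up the same collector through the upstream RocksDB utility:

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    // Register the deletion-triggered compaction heuristic seen in the log
    // (window 32768, trigger 16384, ratio 0) on a column family.
    void AddDeletionCollector(rocksdb::ColumnFamilyOptions& cf) {
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
    }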
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 podman[77868]: 2025-11-25 09:32:28.577111486 +0000 UTC m=+0.037240508 container create 189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d05009b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d05009b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d05009b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
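The option dumps above are emitted verbatim by RocksDB once per column family, and the blocks for O-0, O-1, and O-2 are identical apart from the family name; the two "(skipping printing options)" lines show RocksDB stops repeating itself once the remaining families share the same configuration. When auditing a dump like this it helps to fold the "Options.<key>: <value>" lines into a dict per family so any divergence stands out. A minimal sketch, assuming the journald line layout shown above (the regexes and the "osd.log" file name are illustrative, not part of any Ceph tooling):

import re
from collections import defaultdict

# "Options for column family [O-0]" opens a block; the
# "Options.<key>: <value>" lines that follow belong to that family.
# (table_factory sub-options lack the "Options." prefix and are skipped.)
CF_RE  = re.compile(r"Options for column family \[([^\]]+)\]")
OPT_RE = re.compile(r"Options\.([\w.\[\]]+):\s*(\S.*?)\s*$")

def options_per_family(lines):
    per_cf, current = defaultdict(dict), None
    for line in lines:
        cf = CF_RE.search(line)
        if cf:
            current = cf.group(1)
            continue
        opt = OPT_RE.search(line)
        if opt and current is not None:
            per_cf[current][opt.group(1)] = opt.group(2)
    return per_cf

# with open("osd.log") as f:              # hypothetical input file
#     cfs = options_per_family(f)
# cfs["O-0"] == cfs["O-1"] == cfs["O-2"]  # -> True for the dump above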
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
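The recovery listing shows twelve column families, all replaying from log number 5: [default] plus the m-*, p-*, and O-* triplets and the single [L] and [P] families. This is BlueStore's sharded-RocksDB layout (controlled by the bluestore_rocksdb_cfs setting). A quick sketch that summarizes such a listing by shard prefix, assuming lines shaped like the ones above:

import re
from collections import Counter

CF_RE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")

def shard_counts(lines):
    names = [m.group(1) for m in map(CF_RE.search, lines) if m]
    # Sharded families share a letter prefix: O-0/O-1/O-2 -> "O".
    return Counter(name.split("-")[0] for name in names)

# For the listing above:
# {'m': 3, 'p': 3, 'O': 3, 'default': 1, 'L': 1, 'P': 1}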
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dc4c01c7-57cb-425f-8d06-bf7ebfc17417
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148569821, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148570008, "job": 1, "event": "recovery_finished"}
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
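The _open_db line records the exact RocksDB tuning BlueStore applied, as one comma-separated option string (the same format Ceph's bluestore_rocksdb_options accepts), and its values match the dump above: write_buffer_size=16777216 (16 MiB) with up to 64 write buffers, LZ4 compression, and level-style compaction with a 1 GiB level base and 8x multiplier. A sketch that splits such a string into a dict; note values are kept as strings because human-readable sizes like the 2MB here appear alongside raw integers:

def parse_option_string(s):
    # "k1=v1,k2=v2,..." -> {"k1": "v1", ...}
    return dict(item.split("=", 1) for item in s.split(",") if item)

opts = parse_option_string(
    "compression=kLZ4Compression,max_write_buffer_number=64,"
    "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
    "write_buffer_size=16777216,max_background_jobs=4,"
    "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
    "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
    "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
)
assert opts["write_buffer_size"] == "16777216"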
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: freelist init
Nov 25 09:32:28 compute-1 ceph-osd[77354]: freelist _read_cfg
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
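The allocator line decodes cleanly: capacity 0x4ffc00000 is 21470642176 bytes (~19.996 GiB, logged as 20 GiB), and free 0x4ffbfd000 leaves exactly 0x3000 bytes in use, i.e. three blocks at the 0x1000 block size, consistent with the near-zero fragmentation reported across 2 extents. A quick arithmetic check:

cap, free, blk = 0x4ffc00000, 0x4ffbfd000, 0x1000

print(cap)                     # 21470642176 bytes
print(round(cap / 2**30, 3))   # 19.996 -> logged as "20 GiB"
print((cap - free) // blk)     # 3 blocks (12 KiB) currently allocated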
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs umount
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 09:32:28 compute-1 systemd[1]: Started libpod-conmon-189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc.scope.
Nov 25 09:32:28 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:28 compute-1 podman[77868]: 2025-11-25 09:32:28.623500427 +0000 UTC m=+0.083629470 container init 189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shamir, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 25 09:32:28 compute-1 podman[77868]: 2025-11-25 09:32:28.627561455 +0000 UTC m=+0.087690477 container start 189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:28 compute-1 podman[77868]: 2025-11-25 09:32:28.628807999 +0000 UTC m=+0.088937041 container attach 189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shamir, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:32:28 compute-1 sleepy_shamir[78069]: 167 167
Nov 25 09:32:28 compute-1 systemd[1]: libpod-189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc.scope: Deactivated successfully.
Nov 25 09:32:28 compute-1 podman[77868]: 2025-11-25 09:32:28.631279949 +0000 UTC m=+0.091408970 container died 189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shamir, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-d90ca5ec9634a657c7d1918dbc49b09518d90cde9e8ae8890f5f0eed10fe20de-merged.mount: Deactivated successfully.
Nov 25 09:32:28 compute-1 podman[77868]: 2025-11-25 09:32:28.649800059 +0000 UTC m=+0.109929080 container remove 189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:32:28 compute-1 podman[77868]: 2025-11-25 09:32:28.558963234 +0000 UTC m=+0.019092276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:28 compute-1 systemd[1]: libpod-conmon-189e8ffab8073174637d6af93f5482e8c018cb9da9a0ada9173701f223705dfc.scope: Deactivated successfully.
Nov 25 09:32:28 compute-1 podman[78091]: 2025-11-25 09:32:28.755886802 +0000 UTC m=+0.024896563 container create 85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_austin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:28 compute-1 systemd[1]: Started libpod-conmon-85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a.scope.
Nov 25 09:32:28 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6659aed58a625b1b0ca8f3cacf85c1aa68e362755462027c5c2f06964beee10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6659aed58a625b1b0ca8f3cacf85c1aa68e362755462027c5c2f06964beee10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6659aed58a625b1b0ca8f3cacf85c1aa68e362755462027c5c2f06964beee10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6659aed58a625b1b0ca8f3cacf85c1aa68e362755462027c5c2f06964beee10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:28 compute-1 podman[78091]: 2025-11-25 09:32:28.801270113 +0000 UTC m=+0.070279894 container init 85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_austin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:28 compute-1 podman[78091]: 2025-11-25 09:32:28.806233367 +0000 UTC m=+0.075243127 container start 85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_austin, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:32:28 compute-1 podman[78091]: 2025-11-25 09:32:28.807363944 +0000 UTC m=+0.076373704 container attach 85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_austin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bdev(0x5584d133d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluefs mount shared_bdev_used = 4718592
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: RocksDB version: 7.9.2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Git sha 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: DB SUMMARY
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: DB Session ID:  MCVFDXPNT40ILIQKGN64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: CURRENT file:  CURRENT
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                         Options.error_if_exists: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.create_if_missing: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                                     Options.env: 0x5584d14aa310
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                                Options.info_log: 0x5584d12db940
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                              Options.statistics: (nil)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.use_fsync: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                              Options.db_log_dir: 
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.write_buffer_manager: 0x5584d1406a00
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.unordered_write: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.row_cache: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                              Options.wal_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.two_write_queues: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.wal_compression: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.atomic_flush: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.max_background_jobs: 4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.max_background_compactions: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.max_subcompactions: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.max_open_files: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Compression algorithms supported:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kZSTD supported: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kXpressCompression supported: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kBZip2Compression supported: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kLZ4Compression supported: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kZlibCompression supported: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         kSnappyCompression supported: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12db680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12db680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12db680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12db680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
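[Annotation, not part of the captured log] RocksDB prints this full option set once per BlueStore column-family shard, so the dumps for p-0 through p-2 and O-0/O-1 below are near-identical by design. A minimal Python sketch for grouping the "Options.*" lines per shard so the dumps can be diffed; the file name "osd.log" is a placeholder for wherever this journal excerpt was saved, and the indented table_factory sub-options (which lack the "Options." prefix) are deliberately skipped:

    import re
    from collections import defaultdict

    CF_HDR = re.compile(r"Options for column family \[([^\]]+)\]")
    OPT = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s*(.+)$")

    def parse_cf_options(path):
        """Return {column_family: {option: value}} from a journal excerpt."""
        opts = defaultdict(dict)
        cf = None
        with open(path) as f:
            for line in f:
                m = CF_HDR.search(line)
                if m:
                    cf = m.group(1)  # e.g. "p-0", "O-1"
                    continue
                m = OPT.search(line)
                if m and cf is not None:
                    opts[cf][m.group(1)] = m.group(2).strip()
        return opts

    if __name__ == "__main__":
        cfs = parse_cf_options("osd.log")  # placeholder path
        for name, o in cfs.items():
            print(name, o.get("write_buffer_size"), o.get("compression"))
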
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12db680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12db680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
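[Annotation, not part of the captured log] With level_compaction_dynamic_level_bytes=0 and all max_bytes_for_level_multiplier_addtl factors at 1, the static per-level size targets these options imply are simply max_bytes_for_level_base * multiplier**(level-1). A quick derivation using the numbers copied from the dump above:

    BASE = 1073741824   # max_bytes_for_level_base (1 GiB)
    MULT = 8.0          # max_bytes_for_level_multiplier
    NUM_LEVELS = 7      # Options.num_levels

    for level in range(1, NUM_LEVELS):
        target = BASE * MULT ** (level - 1)
        print(f"L{level}: {target / 2**30:,.0f} GiB")
    # -> L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB,
    #    L5: 4,096 GiB, L6: 32,768 GiB
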
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12db680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d0501350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
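[Annotation, not part of the captured log] The p-* and O-* shards share every option except their block-cache instance (note the distinct block_cache pointers and capacities in the dumps). A small self-contained summary, with all values copied from the dumps above, converting the byte-valued settings to human units:

    SETTINGS = {
        "write_buffer_size": 16777216,          # 16 MiB per memtable
        "min_write_buffer_number_to_merge": 6,  # memtables merged per flush
        "max_write_buffer_number": 64,
        "max_bytes_for_level_base": 1073741824, # 1 GiB for L1
        "target_file_size_base": 67108864,      # 64 MiB SSTs
        "block_cache_capacity_p": 483183820,    # ~460.8 MiB (p-* shards)
        "block_cache_capacity_O": 536870912,    # 512 MiB (O-* shards)
    }

    def mib(n):
        return f"{n / 2**20:.1f} MiB"

    for k, v in SETTINGS.items():
        unit = f"  ({mib(v)})" if v >= 2**20 else ""
        print(f"{k:35s} {v:>12d}{unit}")

    # With these values a shard flushes roughly
    # write_buffer_size * min_write_buffer_number_to_merge = 96 MiB
    # at a time, and can accumulate up to 64 memtables (1 GiB)
    # before RocksDB stalls writes.
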
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d05009b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 podman[78091]: 2025-11-25 09:32:28.745854946 +0000 UTC m=+0.014864726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d05009b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5584d12dbac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5584d05009b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dc4c01c7-57cb-425f-8d06-bf7ebfc17417
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148856212, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148857963, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063148, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc4c01c7-57cb-425f-8d06-bf7ebfc17417", "db_session_id": "MCVFDXPNT40ILIQKGN64", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148858861, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063148, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc4c01c7-57cb-425f-8d06-bf7ebfc17417", "db_session_id": "MCVFDXPNT40ILIQKGN64", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148861263, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063148, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc4c01c7-57cb-425f-8d06-bf7ebfc17417", "db_session_id": "MCVFDXPNT40ILIQKGN64", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148861813, "job": 1, "event": "recovery_finished"}
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5584d14d8000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: DB pointer 0x5584d14b8000
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 25 09:32:28 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:32:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
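The dump ending here is the periodic RocksDB statistics report that ceph-osd emits for each BlueStore column family (p-0..p-2, O-0..O-2, L, P): per-level compaction stats, stall counters, and shared block-cache usage. A minimal Python sketch for pulling the block-cache capacity/usage lines out of a saved copy of such a dump; the input file name is an illustrative assumption:

    import re

    # Minimal sketch: extract per-column-family block cache capacity/usage
    # from a saved copy of a dump like the one above. The file name is an
    # illustrative assumption, not something the log specifies.
    PATTERN = re.compile(r"Block cache \S+ capacity: ([\d.]+ [KMG]B) "
                         r"usage: ([\d.]+ [KMG]B)")

    def cache_usage(path="rocksdb_stats.txt"):
        with open(path) as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    yield m.group(1), m.group(2)

    for cap, used in cache_usage():
        print(f"capacity={cap} used={used}")

On the dump above this would report the two shared caches (460.80 MB and 512.00 MB) with only a few hundred bytes in use, consistent with a freshly created store.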
Nov 25 09:32:28 compute-1 ceph-osd[77354]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 09:32:28 compute-1 ceph-osd[77354]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 09:32:28 compute-1 ceph-osd[77354]: _get_class not permitted to load lua
Nov 25 09:32:28 compute-1 ceph-osd[77354]: _get_class not permitted to load sdk
Nov 25 09:32:28 compute-1 ceph-osd[77354]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 09:32:28 compute-1 ceph-osd[77354]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 09:32:28 compute-1 ceph-osd[77354]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 09:32:28 compute-1 ceph-osd[77354]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 09:32:28 compute-1 ceph-osd[77354]: osd.0 0 load_pgs
Nov 25 09:32:28 compute-1 ceph-osd[77354]: osd.0 0 load_pgs opened 0 pgs
Nov 25 09:32:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0[77350]: 2025-11-25T09:32:28.872+0000 7fac94586740 -1 osd.0 0 log_to_monitors true
Nov 25 09:32:28 compute-1 ceph-osd[77354]: osd.0 0 log_to_monitors true
Nov 25 09:32:29 compute-1 happy_austin[78104]: [
Nov 25 09:32:29 compute-1 happy_austin[78104]:     {
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "available": false,
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "being_replaced": false,
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "ceph_device_lvm": false,
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "lsm_data": {},
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "lvs": [],
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "path": "/dev/sr0",
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "rejected_reasons": [
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "Insufficient space (<5GB)",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "Has a FileSystem"
Nov 25 09:32:29 compute-1 happy_austin[78104]:         ],
Nov 25 09:32:29 compute-1 happy_austin[78104]:         "sys_api": {
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "actuators": null,
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "device_nodes": [
Nov 25 09:32:29 compute-1 happy_austin[78104]:                 "sr0"
Nov 25 09:32:29 compute-1 happy_austin[78104]:             ],
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "devname": "sr0",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "human_readable_size": "474.00 KB",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "id_bus": "ata",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "model": "QEMU DVD-ROM",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "nr_requests": "64",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "parent": "/dev/sr0",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "partitions": {},
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "path": "/dev/sr0",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "removable": "1",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "rev": "2.5+",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "ro": "0",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "rotational": "1",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "sas_address": "",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "sas_device_handle": "",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "scheduler_mode": "mq-deadline",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "sectors": 0,
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "sectorsize": "2048",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "size": 485376.0,
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "support_discard": "2048",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "type": "disk",
Nov 25 09:32:29 compute-1 happy_austin[78104]:             "vendor": "QEMU"
Nov 25 09:32:29 compute-1 happy_austin[78104]:         }
Nov 25 09:32:29 compute-1 happy_austin[78104]:     }
Nov 25 09:32:29 compute-1 happy_austin[78104]: ]
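The JSON above is a ceph-volume inventory report emitted by the short-lived happy_austin container: the only device it found, /dev/sr0 (the QEMU DVD-ROM), is rejected as an OSD candidate for having under 5 GB and an existing filesystem. A minimal sketch for summarizing such a report, assuming it was saved to a file; the keys ("available", "path", "rejected_reasons") match the output above:

    import json

    # Minimal sketch: summarize a saved ceph-volume inventory report like
    # the one logged above. The file name is an assumption.
    def summarize(path="inventory.json"):
        for dev in json.load(open(path)):
            if dev.get("available"):
                print(f"{dev['path']}: usable for an OSD")
            else:
                reasons = ", ".join(dev.get("rejected_reasons", []))
                print(f"{dev['path']}: rejected ({reasons})")

    summarize()

For the report above this prints: /dev/sr0: rejected (Insufficient space (<5GB), Has a FileSystem).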
Nov 25 09:32:29 compute-1 systemd[1]: libpod-85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a.scope: Deactivated successfully.
Nov 25 09:32:29 compute-1 podman[79337]: 2025-11-25 09:32:29.256232146 +0000 UTC m=+0.015789916 container died 85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-a6659aed58a625b1b0ca8f3cacf85c1aa68e362755462027c5c2f06964beee10-merged.mount: Deactivated successfully.
Nov 25 09:32:29 compute-1 podman[79337]: 2025-11-25 09:32:29.274064754 +0000 UTC m=+0.033622524 container remove 85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:32:29 compute-1 systemd[1]: libpod-conmon-85225f90338a87ca38bb9b3d60fb56b354e5f1b6bb15be7c70ff1c11c799358a.scope: Deactivated successfully.
Nov 25 09:32:29 compute-1 sudo[77809]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:29 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 09:32:29 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 09:32:30 compute-1 ceph-osd[77354]: osd.0 0 done with init, starting boot process
Nov 25 09:32:30 compute-1 ceph-osd[77354]: osd.0 0 start_boot
Nov 25 09:32:30 compute-1 ceph-osd[77354]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 09:32:30 compute-1 ceph-osd[77354]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 09:32:30 compute-1 ceph-osd[77354]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 09:32:30 compute-1 ceph-osd[77354]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 09:32:30 compute-1 ceph-osd[77354]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 93.157 iops: 23848.116 elapsed_sec: 0.126
Nov 25 09:32:32 compute-1 ceph-osd[77354]: log_channel(cluster) log [WRN] : OSD bench result of 23848.115549 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
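The warning above means the measured 23848 IOPS fell outside the configured 50-500 IOPS sanity band, so the mClock scheduler keeps the default 315 IOPS capacity for osd.0. As the message itself suggests, the remedy is to benchmark the device externally (e.g. with fio) and override the capacity via `ceph config set`; a minimal sketch of that override, with a placeholder value:

    import subprocess

    # Minimal sketch of the override the warning recommends. The 20000
    # figure is a placeholder; derive the real number from an external
    # benchmark (e.g. fio), not the built-in osd bench, per the message.
    def override_iops(osd="osd.0", device_class="ssd", iops=20000):
        subprocess.run(["ceph", "config", "set", osd,
                        f"osd_mclock_max_capacity_iops_{device_class}",
                        str(iops)], check=True)

    override_iops()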
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 0 waiting for initial osdmap
Nov 25 09:32:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0[77350]: 2025-11-25T09:32:32.008+0000 7fac90509640 -1 osd.0 0 waiting for initial osdmap
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 7 set_numa_affinity not setting numa affinity
Nov 25 09:32:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-0[77350]: 2025-11-25T09:32:32.029+0000 7fac8bb31640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 25 09:32:32 compute-1 ceph-osd[77354]: osd.0 8 state: booting -> active
Nov 25 09:32:33 compute-1 ceph-osd[77354]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 09:32:33 compute-1 ceph-osd[77354]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 25 09:32:33 compute-1 ceph-osd[77354]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 09:32:33 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 9 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:32:34 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=9/10 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
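The two pg_epoch lines above show PG 1.0 peering on its single replica: the Start state hands off to Primary at epoch 9, and the AllReplicasActivated event completes activation at epoch 10. A minimal sketch for confirming all PGs reached an active state afterwards; the "pg_stats"/"pgid"/"state" layout of `ceph pg ls -f json` assumed here reflects recent Ceph releases and is not confirmed by this log:

    import json, subprocess

    # Minimal sketch: list any PG that is not yet active after peering.
    # Assumes `ceph pg ls -f json` returns an object with a "pg_stats"
    # array of {"pgid", "state", ...} records (recent Ceph releases).
    def unhealthy_pgs():
        out = subprocess.run(["ceph", "pg", "ls", "-f", "json"],
                             capture_output=True, text=True, check=True)
        return [p["pgid"]
                for p in json.loads(out.stdout)["pg_stats"]
                if "active" not in p["state"]]

    print(unhealthy_pgs() or "all PGs active")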
Nov 25 09:32:55 compute-1 sudo[79348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:55 compute-1 sudo[79348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:55 compute-1 sudo[79348]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:55 compute-1 sudo[79373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:55 compute-1 sudo[79373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
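The sudo sessions above show how deployments are driven on this host: a copy of the cephadm script staged under /var/lib/ceph/<fsid>/ with a long hex suffix is re-invoked as root with `_orch deploy` against the cluster fsid. A minimal sketch that checks such a staged copy against its own name, on the unconfirmed assumption that the suffix is the file's sha256 digest:

    import hashlib, pathlib

    # Minimal sketch: compare a staged cephadm copy (e.g.
    # /var/lib/ceph/<fsid>/cephadm.<suffix>) against its name. That the
    # suffix is the file's sha256 hex digest is an assumption here, not
    # something this log confirms.
    def staged_copy_matches_name(path):
        p = pathlib.Path(path)
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        return p.suffix.lstrip(".") == digest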
Nov 25 09:32:56 compute-1 podman[79432]: 2025-11-25 09:32:56.163011995 +0000 UTC m=+0.026309523 container create 893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_moore, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:56 compute-1 systemd[1]: Started libpod-conmon-893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf.scope.
Nov 25 09:32:56 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:56 compute-1 podman[79432]: 2025-11-25 09:32:56.211395537 +0000 UTC m=+0.074693065 container init 893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:32:56 compute-1 podman[79432]: 2025-11-25 09:32:56.215909145 +0000 UTC m=+0.079206674 container start 893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_moore, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:32:56 compute-1 podman[79432]: 2025-11-25 09:32:56.217055467 +0000 UTC m=+0.080352995 container attach 893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_moore, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:32:56 compute-1 mystifying_moore[79445]: 167 167
Nov 25 09:32:56 compute-1 systemd[1]: libpod-893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf.scope: Deactivated successfully.
Nov 25 09:32:56 compute-1 podman[79432]: 2025-11-25 09:32:56.219166385 +0000 UTC m=+0.082463913 container died 893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_moore, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-a5c0ee131068d95fa931abc57871d2d4f656d5d2db762fc193465f6fa7e25911-merged.mount: Deactivated successfully.
Nov 25 09:32:56 compute-1 podman[79432]: 2025-11-25 09:32:56.240840701 +0000 UTC m=+0.104138229 container remove 893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:32:56 compute-1 podman[79432]: 2025-11-25 09:32:56.151100933 +0000 UTC m=+0.014398481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:56 compute-1 systemd[1]: libpod-conmon-893d890dc2111703daa6a7b493cc0a829dec0ef897d92a7005c6f5d87170c8bf.scope: Deactivated successfully.
Nov 25 09:32:56 compute-1 podman[79460]: 2025-11-25 09:32:56.283357194 +0000 UTC m=+0.026978755 container create dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_clarke, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:32:56 compute-1 systemd[1]: Started libpod-conmon-dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a.scope.
Nov 25 09:32:56 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:32:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3065f45a9236901f056dffcc793fffdadcbf7837d1e2181aeac4b74094b52e7a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3065f45a9236901f056dffcc793fffdadcbf7837d1e2181aeac4b74094b52e7a/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3065f45a9236901f056dffcc793fffdadcbf7837d1e2181aeac4b74094b52e7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3065f45a9236901f056dffcc793fffdadcbf7837d1e2181aeac4b74094b52e7a/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:56 compute-1 podman[79460]: 2025-11-25 09:32:56.335888124 +0000 UTC m=+0.079509705 container init dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:56 compute-1 podman[79460]: 2025-11-25 09:32:56.339997821 +0000 UTC m=+0.083619382 container start dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:32:56 compute-1 podman[79460]: 2025-11-25 09:32:56.34107473 +0000 UTC m=+0.084696291 container attach dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_clarke, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:32:56 compute-1 podman[79460]: 2025-11-25 09:32:56.271955682 +0000 UTC m=+0.015577264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:56 compute-1 systemd[1]: libpod-dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a.scope: Deactivated successfully.
Nov 25 09:32:56 compute-1 podman[79460]: 2025-11-25 09:32:56.379639996 +0000 UTC m=+0.123261556 container died dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 09:32:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-3065f45a9236901f056dffcc793fffdadcbf7837d1e2181aeac4b74094b52e7a-merged.mount: Deactivated successfully.
Nov 25 09:32:56 compute-1 podman[79460]: 2025-11-25 09:32:56.399636168 +0000 UTC m=+0.143257729 container remove dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:56 compute-1 systemd[1]: libpod-conmon-dbf1c3298bedd9cadceba69e430ea5e696300c8e08909876d4e81ab6c1ae311a.scope: Deactivated successfully.
Nov 25 09:32:56 compute-1 systemd[1]: Reloading.
Nov 25 09:32:56 compute-1 systemd-sysv-generator[79539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:56 compute-1 systemd-rc-local-generator[79534]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:56 compute-1 systemd[1]: Reloading.
Nov 25 09:32:56 compute-1 systemd-rc-local-generator[79571]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:56 compute-1 systemd-sysv-generator[79574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:56 compute-1 systemd[1]: Starting Ceph mon.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:32:56 compute-1 podman[79627]: 2025-11-25 09:32:56.96706292 +0000 UTC m=+0.027533321 container create 20d26b9df30e6dee712c4344490034e2145fc1c0e39d9882ab08daf2019cec00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63a7be66ddd2d9ad497dd04d95a1dc0487c26dac65e6e0f2e9350caa59c981e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63a7be66ddd2d9ad497dd04d95a1dc0487c26dac65e6e0f2e9350caa59c981e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63a7be66ddd2d9ad497dd04d95a1dc0487c26dac65e6e0f2e9350caa59c981e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63a7be66ddd2d9ad497dd04d95a1dc0487c26dac65e6e0f2e9350caa59c981e/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:57 compute-1 podman[79627]: 2025-11-25 09:32:57.007693937 +0000 UTC m=+0.068164360 container init 20d26b9df30e6dee712c4344490034e2145fc1c0e39d9882ab08daf2019cec00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:57 compute-1 podman[79627]: 2025-11-25 09:32:57.013221958 +0000 UTC m=+0.073692360 container start 20d26b9df30e6dee712c4344490034e2145fc1c0e39d9882ab08daf2019cec00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:32:57 compute-1 bash[79627]: 20d26b9df30e6dee712c4344490034e2145fc1c0e39d9882ab08daf2019cec00
Nov 25 09:32:57 compute-1 podman[79627]: 2025-11-25 09:32:56.955271102 +0000 UTC m=+0.015741504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:57 compute-1 systemd[1]: Started Ceph mon.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:32:57 compute-1 ceph-mon[79643]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pidfile_write: ignore empty --pid-file
Nov 25 09:32:57 compute-1 ceph-mon[79643]: load: jerasure load: lrc 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: RocksDB version: 7.9.2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Git sha 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: DB SUMMARY
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: DB Session ID:  A0TFYK0291ZT1BVMXF7C
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: CURRENT file:  CURRENT
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 636 ; 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                         Options.error_if_exists: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                       Options.create_if_missing: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                                     Options.env: 0x5633d8d86c20
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                                Options.info_log: 0x5633d9fa3a20
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                              Options.statistics: (nil)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                               Options.use_fsync: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                              Options.db_log_dir: 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                                 Options.wal_dir: 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                    Options.write_buffer_manager: 0x5633d9fa7900
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.unordered_write: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                               Options.row_cache: None
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                              Options.wal_filter: None
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.two_write_queues: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.wal_compression: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.atomic_flush: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 09:32:57 compute-1 sudo[79373]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.max_background_jobs: 2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.max_background_compactions: -1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.max_subcompactions: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                          Options.max_open_files: -1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Compression algorithms supported:
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         kZSTD supported: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         kXpressCompression supported: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         kBZip2Compression supported: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         kLZ4Compression supported: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         kZlibCompression supported: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         kSnappyCompression supported: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:           Options.merge_operator: 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633d9fa25c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633d9fc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:          Options.compression: NoCompression
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e6723030-6d80-4936-b19c-e97b87ba28bf
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063177044088, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063177044931, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, "raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063177045009, "job": 1, "event": "recovery_finished"}
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5633d9fc8e00
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: DB pointer 0x5633da0d2000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:32:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.73 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.73 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.26 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.26 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633d9fc7350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
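The rocksdb EVENT_LOG_v1 lines above (recovery_started, table_file_creation, recovery_finished) embed structured JSON after a fixed prefix, so they lend themselves to programmatic analysis. A minimal sketch, assuming the journal text has been saved to a file (the name ceph-mon.log is hypothetical):

    import json
    import re

    # Matches the JSON payload rocksdb appends after "EVENT_LOG_v1 ".
    EVENT_RE = re.compile(r"rocksdb: EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(path):
        """Yield the structured rocksdb events embedded in a journal dump."""
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    yield json.loads(m.group(1))

    for ev in rocksdb_events("ceph-mon.log"):  # hypothetical file name
        print(ev["event"], ev.get("time_micros"))

Run against the three events above, this prints recovery_started, table_file_creation, and recovery_finished with their microsecond timestamps.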
Nov 25 09:32:57 compute-1 ceph-mon[79643]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Nov 25 09:32:57 compute-1 ceph-mon[79643]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(???) e0 preinit fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).mds e1 new map
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           btime 2025-11-25T09:31:16.071954+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osdmap e6: 2 total, 0 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Adjusting osd_memory_target on compute-0 to 128.7M
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Unable to set osd_memory_target on compute-0 to 134963200: error parsing value: Value '134963200' is below minimum 939524096
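The failure above is pure unit arithmetic: the autotuner proposed 134963200 bytes (the "128.7M" in the preceding line) for the OSD on compute-0, but osd_memory_target enforces a hard floor of 939524096 bytes (896 MiB), so the monitor refused to store the value. A minimal sketch of the check, using only the two numbers from the log:

    # Why "128.7M" was rejected: osd_memory_target has a hard minimum.
    proposed = 134963200      # bytes, from the autotuner
    minimum = 939524096       # bytes, i.e. 896 MiB

    print(round(proposed / 2**20, 1))   # 128.7 -- matches the log line
    print(minimum / 2**20)              # 896.0
    print(proposed < minimum)           # True, hence "below minimum"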
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/611149476' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
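The "status" dispatch above shows the JSON command format the monitor accepts from clients. The same call can be issued from Python through the rados bindings; a minimal sketch, assuming a reachable cluster and the usual /etc/ceph/ceph.conf plus an admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same command the monitor logged for client.admin above.
        cmd = json.dumps({"prefix": "status", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        status = json.loads(outbuf)
        print(status["health"]["status"])  # e.g. HEALTH_OK once quorum forms
    finally:
        cluster.shutdown()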
Nov 25 09:32:57 compute-1 ceph-mon[79643]: purged_snaps scrub starts
Nov 25 09:32:57 compute-1 ceph-mon[79643]: purged_snaps scrub ok
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osdmap e7: 2 total, 0 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: purged_snaps scrub starts
Nov 25 09:32:57 compute-1 ceph-mon[79643]: purged_snaps scrub ok
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: OSD bench result of 21369.172304 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: OSD bench result of 23848.115549 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
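Both bench results above (23848 IOPS for osd.0, 21369 IOPS for osd.1) fall far outside the 50-500 IOPS sanity window for the hdd device class, so mclock keeps its 315 IOPS default and the message suggests measuring real capacity with a tool such as fio, then overriding the option it names. A hedged sketch of applying such an override through the ceph CLI from Python; the 600 IOPS figure is a placeholder for a measured value, and the host is assumed to have the CLI and an admin keyring:

    import subprocess

    measured_iops = "600"  # hypothetical; substitute a fio-measured figure

    for osd in ("osd.0", "osd.1"):
        # Overrides osd_mclock_max_capacity_iops_hdd, the option the
        # log message recommends setting, one OSD at a time.
        subprocess.run(
            ["ceph", "config", "set", osd,
             "osd_mclock_max_capacity_iops_hdd", measured_iops],
            check=True,
        )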
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021] boot
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040] boot
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osdmap e8: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osdmap e9: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v32: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osdmap e10: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mgrmap e9: compute-0.zcfgby(active, since 62s)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v36: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Deploying daemon mon.compute-2 on compute-2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-0 calling monitor election
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-2 calling monitor election
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: monmap epoch 2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:57 compute-1 ceph-mon[79643]: last_changed 2025-11-25T09:32:50.766337+0000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: created 2025-11-25T09:31:14.695764+0000
Nov 25 09:32:57 compute-1 ceph-mon[79643]: min_mon_release 19 (squid)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: election_strategy: 1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:32:57 compute-1 ceph-mon[79643]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 25 09:32:57 compute-1 ceph-mon[79643]: fsmap 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mgrmap e9: compute-0.zcfgby(active, since 82s)
Nov 25 09:32:57 compute-1 ceph-mon[79643]: overall HEALTH_OK
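Throughout this quorum report each monitor advertises a paired address vector: msgr v2 on its default port 3300 and the legacy v1 protocol on 6789. A small regex-based sketch for pulling those vectors apart, using mon.compute-0's entry from the monmap above:

    import re

    ADDR_RE = re.compile(r"(v[12]):([\d.]+):(\d+)/\d+")

    def parse_addrvec(vec):
        """Turn '[v2:ip:3300/0,v1:ip:6789/0]' into {proto: (ip, port)}."""
        return {p: (ip, int(port)) for p, ip, port in ADDR_RE.findall(vec)}

    print(parse_addrvec("[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]"))
    # {'v2': ('192.168.122.100', 3300), 'v1': ('192.168.122.100', 6789)}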
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: Deploying daemon mon.compute-1 on compute-1
Nov 25 09:32:57 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:57 compute-1 ceph-mon[79643]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
Nov 25 09:32:59 compute-1 ceph-mon[79643]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Nov 25 09:32:59 compute-1 ceph-mon[79643]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 25 09:32:59 compute-1 ceph-mon[79643]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 25 09:32:59 compute-1 ceph-mon[79643]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865372,os=Linux}
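The update_daemon_metadata line above packs the daemon's metadata into a single {key=value,...} blob, and some values (addrs, compression_algorithms) contain commas themselves, so a naive split(",") would mangle them. Splitting only at commas that are immediately followed by a new key= token recovers the fields; a sketch over an abridged version of the blob above:

    import re

    def parse_metadata(blob):
        """Split a '{k=v,k=v}' metadata blob, tolerating commas in values."""
        body = blob.strip().strip("{}")
        # Split only at commas immediately followed by a new key= token.
        pairs = re.split(r",(?=\w+=)", body)
        return dict(p.split("=", 1) for p in pairs)

    meta = parse_metadata(
        "{hostname=compute-1,compression_algorithms=none, snappy, zlib, zstd, lz4,os=Linux}"
    )
    print(meta["compression_algorithms"])  # none, snappy, zlib, zstd, lz4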
Nov 25 09:33:02 compute-1 ceph-mon[79643]: Deploying daemon mgr.compute-2.flybft on compute-2
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-0 calling monitor election
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-2 calling monitor election
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-1 calling monitor election
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 25 09:33:02 compute-1 ceph-mon[79643]: monmap epoch 3
Nov 25 09:33:02 compute-1 ceph-mon[79643]: fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:02 compute-1 ceph-mon[79643]: last_changed 2025-11-25T09:32:57.086503+0000
Nov 25 09:33:02 compute-1 ceph-mon[79643]: created 2025-11-25T09:31:14.695764+0000
Nov 25 09:33:02 compute-1 ceph-mon[79643]: min_mon_release 19 (squid)
Nov 25 09:33:02 compute-1 ceph-mon[79643]: election_strategy: 1
Nov 25 09:33:02 compute-1 ceph-mon[79643]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:33:02 compute-1 ceph-mon[79643]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 25 09:33:02 compute-1 ceph-mon[79643]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 25 09:33:02 compute-1 ceph-mon[79643]: fsmap 
Nov 25 09:33:02 compute-1 ceph-mon[79643]: osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mgrmap e9: compute-0.zcfgby(active, since 89s)
Nov 25 09:33:02 compute-1 ceph-mon[79643]: overall HEALTH_OK
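[The block from "monmap epoch 3" through "overall HEALTH_OK" is the status summary the cluster logs once the election settles: three mons in quorum, two OSDs up, one active mgr. The same view on demand:

    ceph -s
    ceph mon dump
]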
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:02 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
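[The auth get-or-create / config generate-minimal-conf pair above is cephadm preparing credentials and a minimal ceph.conf for the mgr daemon it is about to deploy on this host. Replayed by hand, the dispatched JSON corresponds to:

    ceph auth get-or-create mgr.compute-1.plffrn \
        mon 'profile mgr' osd 'allow *' mds 'allow *'
    ceph config generate-minimal-conf
]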
Nov 25 09:33:02 compute-1 sudo[79682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:02 compute-1 sudo[79682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:02 compute-1 sudo[79682]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:02 compute-1 sudo[79707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:02 compute-1 sudo[79707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:02 compute-1 podman[79766]: 2025-11-25 09:33:02.48802521 +0000 UTC m=+0.027578535 container create 4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_babbage, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:02 compute-1 systemd[1]: Started libpod-conmon-4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b.scope.
Nov 25 09:33:02 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:33:02 compute-1 podman[79766]: 2025-11-25 09:33:02.546161696 +0000 UTC m=+0.085715042 container init 4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_babbage, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:33:02 compute-1 podman[79766]: 2025-11-25 09:33:02.551282398 +0000 UTC m=+0.090835725 container start 4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_babbage, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 25 09:33:02 compute-1 podman[79766]: 2025-11-25 09:33:02.552393683 +0000 UTC m=+0.091947010 container attach 4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 25 09:33:02 compute-1 clever_babbage[79779]: 167 167
Nov 25 09:33:02 compute-1 systemd[1]: libpod-4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b.scope: Deactivated successfully.
Nov 25 09:33:02 compute-1 conmon[79779]: conmon 4c883068c24be8220fc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b.scope/container/memory.events
Nov 25 09:33:02 compute-1 podman[79766]: 2025-11-25 09:33:02.555168082 +0000 UTC m=+0.094721409 container died 4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_babbage, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:33:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-892801182ab01fabc42aec1146d189f17a69a080810e236afa66697e9a66ff0f-merged.mount: Deactivated successfully.
Nov 25 09:33:02 compute-1 podman[79766]: 2025-11-25 09:33:02.476435023 +0000 UTC m=+0.015988349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:02 compute-1 podman[79766]: 2025-11-25 09:33:02.572959859 +0000 UTC m=+0.112513186 container remove 4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:33:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:02 compute-1 systemd[1]: libpod-conmon-4c883068c24be8220fc458be4ff17f2e5eff0f6a65912b37d0dbabc47eca093b.scope: Deactivated successfully.
Nov 25 09:33:02 compute-1 systemd[1]: Reloading.
Nov 25 09:33:02 compute-1 systemd-sysv-generator[79818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:02 compute-1 systemd-rc-local-generator[79815]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:02 compute-1 systemd[1]: Reloading.
Nov 25 09:33:02 compute-1 systemd-rc-local-generator[79856]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:02 compute-1 systemd-sysv-generator[79859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:03 compute-1 systemd[1]: Starting Ceph mgr.compute-1.plffrn for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:33:03 compute-1 ceph-mon[79643]: Deploying daemon mgr.compute-1.plffrn on compute-1
Nov 25 09:33:03 compute-1 ceph-mon[79643]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:03 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:03 compute-1 podman[79912]: 2025-11-25 09:33:03.177966729 +0000 UTC m=+0.027214380 container create 8e1b9a1b4f0892613729863fba668070efccc754b7d606fae4019f1906e4fc02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:33:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eda10a1dee1720d44a34bab5e0089649c1155e1a5c817e6fb640bfa5e8c4127/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eda10a1dee1720d44a34bab5e0089649c1155e1a5c817e6fb640bfa5e8c4127/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eda10a1dee1720d44a34bab5e0089649c1155e1a5c817e6fb640bfa5e8c4127/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eda10a1dee1720d44a34bab5e0089649c1155e1a5c817e6fb640bfa5e8c4127/merged/var/lib/ceph/mgr/ceph-compute-1.plffrn supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:03 compute-1 podman[79912]: 2025-11-25 09:33:03.222399885 +0000 UTC m=+0.071647556 container init 8e1b9a1b4f0892613729863fba668070efccc754b7d606fae4019f1906e4fc02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:03 compute-1 podman[79912]: 2025-11-25 09:33:03.226810237 +0000 UTC m=+0.076057889 container start 8e1b9a1b4f0892613729863fba668070efccc754b7d606fae4019f1906e4fc02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:33:03 compute-1 bash[79912]: 8e1b9a1b4f0892613729863fba668070efccc754b7d606fae4019f1906e4fc02
Nov 25 09:33:03 compute-1 podman[79912]: 2025-11-25 09:33:03.167789424 +0000 UTC m=+0.017037076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:03 compute-1 systemd[1]: Started Ceph mgr.compute-1.plffrn for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
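[cephadm wraps each containerized daemon in a templated systemd unit named ceph-<fsid>@<daemon>.service, which is what systemd reports as started here. Assuming that standard naming, the new mgr can be inspected with:

    systemctl status ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mgr.compute-1.plffrn.service
]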
Nov 25 09:33:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:03 compute-1 sudo[79707]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:03 compute-1 ceph-mgr[79928]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:33:03 compute-1 ceph-mgr[79928]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:33:03 compute-1 ceph-mgr[79928]: pidfile_write: ignore empty --pid-file
Nov 25 09:33:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:03 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'alerts'
Nov 25 09:33:03 compute-1 ceph-mgr[79928]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:03 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'balancer'
Nov 25 09:33:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:03.374+0000 7f807f7ad140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:03 compute-1 ceph-mgr[79928]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:03 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'cephadm'
Nov 25 09:33:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:03.444+0000 7f807f7ad140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'crash'
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'dashboard'
Nov 25 09:33:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:04.116+0000 7f807f7ad140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:04 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 09:33:04 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:04 compute-1 ceph-mon[79643]: Deploying daemon crash.compute-2 on compute-2
Nov 25 09:33:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2646714900' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:33:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:04.654+0000 7f807f7ad140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:33:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:33:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]:   from numpy import show_config as show_numpy_config
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:04.799+0000 7f807f7ad140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'influx'
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'insights'
Nov 25 09:33:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:04.860+0000 7f807f7ad140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'iostat'
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:04 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:33:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:04.979+0000 7f807f7ad140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e12 e12: 2 total, 2 up, 2 in
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2513898650' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:05 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:05 compute-1 ceph-mon[79643]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:05 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'localpool'
Nov 25 09:33:05 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:33:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:05 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mirroring'
Nov 25 09:33:05 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'nfs'
Nov 25 09:33:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e13 e13: 3 total, 2 up, 3 in
Nov 25 09:33:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:05 compute-1 ceph-mgr[79928]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:05 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:33:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:05.838+0000 7f807f7ad140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:05 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:33:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:06.025+0000 7f807f7ad140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_support'
Nov 25 09:33:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:06.093+0000 7f807f7ad140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:33:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:06.154+0000 7f807f7ad140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'progress'
Nov 25 09:33:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:06.224+0000 7f807f7ad140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'prometheus'
Nov 25 09:33:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:06.286+0000 7f807f7ad140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2513898650' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
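[The pool-create requests in this stretch (vms, volumes, backups, images, cephfs.cephfs.meta, cephfs.cephfs.data) come from an external deployment client as `osd pool create` mon commands. A hand-run equivalent for the first one, plus a way to confirm the results as they land:

    ceph osd pool create vms
    ceph osd pool ls detail
]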
Nov 25 09:33:06 compute-1 ceph-mon[79643]: osdmap e12: 2 total, 2 up, 2 in
Nov 25 09:33:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4258346990' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"}]: dispatch
Nov 25 09:33:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4258346990' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"}]': finished
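[The `osd new` exchange above reserves an OSD id for the given volume UUID; this is the step ceph-volume performs under the bootstrap-osd identity while preparing a disk on compute-2. By hand it would be (UUID taken verbatim from the log):

    ceph osd new 836b14f9-a1aa-4fbf-bd6d-42374c72028e
]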
Nov 25 09:33:06 compute-1 ceph-mon[79643]: osdmap e13: 3 total, 2 up, 3 in
Nov 25 09:33:06 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3191460124' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4042166828' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:33:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:06.585+0000 7f807f7ad140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'restful'
Nov 25 09:33:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:06.670+0000 7f807f7ad140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:06 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e14 e14: 3 total, 2 up, 3 in
Nov 25 09:33:06 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=12/14 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:06 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rgw'
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rook'
Nov 25 09:33:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:07.047+0000 7f807f7ad140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e14 _set_new_cache_sizes cache_size:1019935741 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'selftest'
Nov 25 09:33:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:07.544+0000 7f807f7ad140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:33:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:07.606+0000 7f807f7ad140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'stats'
Nov 25 09:33:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:07.677+0000 7f807f7ad140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'status'
Nov 25 09:33:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e15 e15: 3 total, 2 up, 3 in
Nov 25 09:33:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3191460124' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:07 compute-1 ceph-mon[79643]: osdmap e14: 3 total, 2 up, 3 in
Nov 25 09:33:07 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:07 compute-1 ceph-mon[79643]: pgmap v54: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:07 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4269819066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:07 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telegraf'
Nov 25 09:33:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:07.807+0000 7f807f7ad140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:07 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telemetry'
Nov 25 09:33:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:07.869+0000 7f807f7ad140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:08 compute-1 ceph-mgr[79928]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:08 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:33:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:08.003+0000 7f807f7ad140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:08 compute-1 ceph-mgr[79928]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:08 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'volumes'
Nov 25 09:33:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:08.196+0000 7f807f7ad140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:08 compute-1 ceph-mgr[79928]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:08 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'zabbix'
Nov 25 09:33:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:08.427+0000 7f807f7ad140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:08 compute-1 ceph-mgr[79928]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:08.489+0000 7f807f7ad140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
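[Every bundled mgr module that predates the NOTIFY_TYPES interface triggers one "missing NOTIFY_TYPES member" line as the standby mgr scans it, from 'alerts' above through 'zabbix' here. These are load-time warnings, not failures; the modules still show up in the inventory:

    ceph mgr module ls
]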
Nov 25 09:33:08 compute-1 ceph-mgr[79928]: ms_deliver_dispatch: unhandled message 0x55d94d11ed00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 09:33:08 compute-1 ceph-mon[79643]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
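[POOL_APP_NOT_ENABLED fires because the pools just created carry no application tag yet; the `osd pool application enable` calls that follow clear it pool by pool. To see and fix the warning manually:

    ceph health detail
    ceph osd pool application enable vms rbd
]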
Nov 25 09:33:08 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4269819066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:08 compute-1 ceph-mon[79643]: osdmap e15: 3 total, 2 up, 3 in
Nov 25 09:33:08 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:08 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft started
Nov 25 09:33:08 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2992947115' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:08 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn started
Nov 25 09:33:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e16 e16: 3 total, 2 up, 3 in
Nov 25 09:33:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e17 e17: 3 total, 2 up, 3 in
Nov 25 09:33:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2992947115' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:09 compute-1 ceph-mon[79643]: osdmap e16: 3 total, 2 up, 3 in
Nov 25 09:33:09 compute-1 ceph-mon[79643]: mgrmap e10: compute-0.zcfgby(active, since 95s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:09 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:09 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:09 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:09 compute-1 ceph-mon[79643]: pgmap v57: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2175287734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:10 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e18 e18: 3 total, 2 up, 3 in
Nov 25 09:33:10 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 18 pg[7.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2175287734' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:10 compute-1 ceph-mon[79643]: osdmap e17: 3 total, 2 up, 3 in
Nov 25 09:33:10 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:10 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 09:33:10 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3148501607' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3148501607' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e19 e19: 3 total, 2 up, 3 in
Nov 25 09:33:11 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 19 pg[7.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:11 compute-1 ceph-mon[79643]: Deploying daemon osd.2 on compute-2
Nov 25 09:33:11 compute-1 ceph-mon[79643]: osdmap e18: 3 total, 2 up, 3 in
Nov 25 09:33:11 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:11 compute-1 ceph-mon[79643]: pgmap v60: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2930438515' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 09:33:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e19 _set_new_cache_sizes cache_size:1020053175 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2930438515' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 09:33:12 compute-1 ceph-mon[79643]: osdmap e19: 3 total, 2 up, 3 in
Nov 25 09:33:12 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1942627046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 09:33:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Nov 25 09:33:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1942627046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 09:33:13 compute-1 ceph-mon[79643]: osdmap e20: 3 total, 2 up, 3 in
Nov 25 09:33:13 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:13 compute-1 ceph-mon[79643]: pgmap v63: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:13 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:13 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3230090525' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 09:33:13 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Nov 25 09:33:14 compute-1 sudo[79960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:33:14 compute-1 sudo[79960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:14 compute-1 sudo[79960]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:14 compute-1 sudo[79985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:14 compute-1 sudo[79985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:14 compute-1 sudo[79985]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:14 compute-1 sudo[80010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:33:14 compute-1 sudo[80010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:14 compute-1 podman[80093]: 2025-11-25 09:33:14.614258566 +0000 UTC m=+0.039166639 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:14 compute-1 podman[80093]: 2025-11-25 09:33:14.692720446 +0000 UTC m=+0.117628519 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:33:14 compute-1 ceph-mon[79643]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3230090525' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 09:33:14 compute-1 ceph-mon[79643]: osdmap e21: 3 total, 2 up, 3 in
Nov 25 09:33:14 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:14 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/146186615' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 09:33:14 compute-1 sudo[80010]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:14 compute-1 sudo[80165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:14 compute-1 sudo[80165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:14 compute-1 sudo[80165]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:15 compute-1 sudo[80190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:33:15 compute-1 sudo[80190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:15 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Nov 25 09:33:15 compute-1 sudo[80190]: pam_unix(sudo:session): session closed for user root
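[gather-facts is a read-only cephadm subcommand that dumps host inventory (CPU, memory, disks, NICs) as JSON; that is what the orchestrator is collecting through sudo above. Run locally:

    sudo cephadm gather-facts
]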
Nov 25 09:33:15 compute-1 ceph-mon[79643]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:15 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/146186615' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 09:33:15 compute-1 ceph-mon[79643]: osdmap e22: 3 total, 2 up, 3 in
Nov 25 09:33:15 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:15 compute-1 ceph-mon[79643]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 09:33:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370321277' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 09:33:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Nov 25 09:33:16 compute-1 sudo[80243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:16 compute-1 sudo[80243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80243]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:16 compute-1 sudo[80268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80268]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:16 compute-1 sudo[80293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80293]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:16 compute-1 sudo[80318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80318]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:16 compute-1 sudo[80343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80343]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:16 compute-1 sudo[80391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80391]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:16 compute-1 sudo[80416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80416]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:33:16 compute-1 sudo[80441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80441]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:16 compute-1 sudo[80466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80466]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:16 compute-1 sudo[80491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80491]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:16 compute-1 sudo[80516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80516]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:16 compute-1 sudo[80541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80541]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:16 compute-1 sudo[80566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80566]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:16 compute-1 sudo[80614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80614]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:16 compute-1 sudo[80639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80639]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-1 sudo[80664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:16 compute-1 sudo[80664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-1 sudo[80664]: pam_unix(sudo:session): session closed for user root
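[The mkdir/touch/chown/chmod/mv sequence above is cephadm's atomic config distribution: the new ceph.conf is staged under /tmp/cephadm-<fsid>/ with its final ownership and mode, then moved into place so readers never see a partial file. Whether cephadm manages /etc/ceph/ceph.conf this way is itself a mgr option; assuming stock settings, it can be checked with:

    ceph config get mgr mgr/cephadm/manage_etc_ceph_ceph_conf
]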
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: Adjusting osd_memory_target on compute-2 to 128.7M
Nov 25 09:33:17 compute-1 ceph-mon[79643]: Unable to set osd_memory_target on compute-2 to 134971801: error parsing value: Value '134971801' is below minimum 939524096
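These two lines are the mgr's memory autotuner at work: it computed a per-OSD target of 134971801 bytes for compute-2 (134971801 / 2^20 ≈ 128.7 MiB, hence "128.7M"), and the monitor rejected it because osd_memory_target has a hard floor of 939524096 bytes (896 MiB). A small sketch of that validation, with the minimum and wording taken from the error message itself rather than from Ceph's source:

    OSD_MEMORY_TARGET_MIN = 939_524_096  # 896 MiB, the floor cited above

    def validate_osd_memory_target(value: int) -> int:
        """Refuse values below the option's minimum, as the mon does."""
        if value < OSD_MEMORY_TARGET_MIN:
            raise ValueError(
                f"Value '{value}' is below minimum {OSD_MEMORY_TARGET_MIN}")
        return value

    try:
        validate_osd_memory_target(134_971_801)   # the autotuner's proposal
    except ValueError as exc:
        print(f"error parsing value: {exc}")      # same failure as logged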
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:17 compute-1 ceph-mon[79643]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:17 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370321277' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 09:33:17 compute-1 ceph-mon[79643]: osdmap e23: 3 total, 2 up, 3 in
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:17 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:17 compute-1 ceph-mon[79643]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1194115357' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:17 compute-1 ceph-mon[79643]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Nov 25 09:33:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e24 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:18 compute-1 ceph-mon[79643]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 25 09:33:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1194115357' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 09:33:18 compute-1 ceph-mon[79643]: osdmap e24: 3 total, 2 up, 3 in
Nov 25 09:33:18 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:18 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:19 compute-1 ceph-mon[79643]: purged_snaps scrub starts
Nov 25 09:33:19 compute-1 ceph-mon[79643]: purged_snaps scrub ok
Nov 25 09:33:19 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:19 compute-1 ceph-mon[79643]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e25 e25: 3 total, 3 up, 3 in
Nov 25 09:33:19 compute-1 sudo[80689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:33:19 compute-1 sudo[80689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:19 compute-1 sudo[80689]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:20 compute-1 ceph-mon[79643]: OSD bench result of 23457.770996 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
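This message is mclock's plausibility filter: the measured 23457.77 IOPS is far outside the [50, 500] band considered believable for HDD-class osd.2, so the configured capacity stays at 315 IOPS and the operator is advised to benchmark externally (e.g. with fio) and override osd_mclock_max_capacity_iops_[hdd|ssd]. The accept/reject logic, sketched from the numbers given in the message:

    def mclock_iops_capacity(measured: float, current: float,
                             low: float = 50.0, high: float = 500.0) -> float:
        """Adopt the benchmark result only inside the threshold band;
        otherwise keep the existing capacity, as the log reports."""
        return measured if low <= measured <= high else current

    assert mclock_iops_capacity(23457.770996, 315.0) == 315.0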
Nov 25 09:33:20 compute-1 ceph-mon[79643]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:20 compute-1 ceph-mon[79643]: Cluster is now healthy
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:20 compute-1 ceph-mon[79643]: osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192] boot
Nov 25 09:33:20 compute-1 ceph-mon[79643]: osdmap e25: 3 total, 3 up, 3 in
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2793311854' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2793311854' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:20 compute-1 ceph-mon[79643]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:20 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:20 compute-1 ceph-mon[79643]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 09:33:20 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e26 e26: 3 total, 3 up, 3 in
Nov 25 09:33:21 compute-1 ceph-mon[79643]: osdmap e26: 3 total, 3 up, 3 in
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-1 ceph-mon[79643]: Reconfiguring mgr.compute-0.zcfgby (monmap changed)...
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zcfgby", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:21 compute-1 ceph-mon[79643]: Reconfiguring daemon mgr.compute-0.zcfgby on compute-0
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/736770130' entity='client.admin' 
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-1 ceph-mon[79643]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:21 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:21 compute-1 ceph-mon[79643]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 25 09:33:21 compute-1 ceph-mon[79643]: pgmap v73: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:21 compute-1 sudo[80714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:21 compute-1 sudo[80714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:21 compute-1 sudo[80714]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:21 compute-1 sudo[80739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:21 compute-1 sudo[80739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:22 compute-1 podman[80778]: 2025-11-25 09:33:22.02146566 +0000 UTC m=+0.022867888 container create 3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:22 compute-1 systemd[1]: Started libpod-conmon-3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097.scope.
Nov 25 09:33:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:22 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:33:22 compute-1 podman[80778]: 2025-11-25 09:33:22.070889462 +0000 UTC m=+0.072291691 container init 3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ganguly, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:22 compute-1 podman[80778]: 2025-11-25 09:33:22.075022293 +0000 UTC m=+0.076424531 container start 3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ganguly, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 09:33:22 compute-1 podman[80778]: 2025-11-25 09:33:22.076307115 +0000 UTC m=+0.077709363 container attach 3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 09:33:22 compute-1 cranky_ganguly[80792]: 167 167
Nov 25 09:33:22 compute-1 systemd[1]: libpod-3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097.scope: Deactivated successfully.
Nov 25 09:33:22 compute-1 podman[80778]: 2025-11-25 09:33:22.077781564 +0000 UTC m=+0.079183791 container died 3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ganguly, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:33:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-97ac46cb54a19533fbb9803821d1164dc0e3cad78cfe7c0a258aa17408add10b-merged.mount: Deactivated successfully.
Nov 25 09:33:22 compute-1 podman[80778]: 2025-11-25 09:33:22.093850473 +0000 UTC m=+0.095252701 container remove 3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ganguly, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:33:22 compute-1 podman[80778]: 2025-11-25 09:33:22.011856015 +0000 UTC m=+0.013258263 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:22 compute-1 ceph-mon[79643]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-1 ceph-mon[79643]: Saving service ingress.rgw.default spec with placement count:2
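The "Saving service ... spec" lines record `ceph orch apply` persisting service specifications: an RGW service placed on all three hosts and an ingress service with count:2. A plausible reconstruction of those two specs as data (the field names follow orchestrator service-spec conventions and are an assumption; only the placements come from the log):

    import json

    rgw_spec = {
        "service_type": "rgw",
        "service_id": "rgw",
        "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]},
    }
    ingress_spec = {
        "service_type": "ingress",
        "service_id": "rgw.default",
        "placement": {"count": 2},
    }
    print(json.dumps([rgw_spec, ingress_spec], indent=2))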
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-1 ceph-mon[79643]: Reconfiguring osd.1 (monmap changed)...
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:22 compute-1 ceph-mon[79643]: Reconfiguring daemon osd.1 on compute-0
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-1 ceph-mon[79643]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:22 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:22 compute-1 ceph-mon[79643]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 25 09:33:22 compute-1 systemd[1]: libpod-conmon-3955e3fb343b5ef6998bfae19cec0b74b6f89410bdc2eb800bc60291663cf097.scope: Deactivated successfully.
Nov 25 09:33:22 compute-1 sudo[80739]: pam_unix(sudo:session): session closed for user root
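Each `_orch deploy` run above spawns a short-lived container (create, init, start, attach, one line of output, then died and remove, all within a second). The "167 167" the container prints is consistent with cephadm probing the uid/gid that Ceph runs as inside the image (167 is the ceph user in these builds). A hedged sketch of such a one-shot probe with subprocess and podman (the stat invocation is illustrative, not necessarily the exact command cephadm runs):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def one_shot(args: list[str]) -> str:
        """Run a throwaway container and capture its stdout, matching the
        create/start/attach/died/remove lifecycle in the podman lines above."""
        return subprocess.run(["podman", "run", "--rm", IMAGE, *args],
                              check=True, capture_output=True, text=True).stdout

    # e.g. ask the image which uid/gid owns /var/lib/ceph ("167 167" above)
    print(one_shot(["stat", "-c", "%u %g", "/var/lib/ceph"]).strip())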
Nov 25 09:33:22 compute-1 sudo[80806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:22 compute-1 sudo[80806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:22 compute-1 sudo[80806]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:22 compute-1 sudo[80831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:22 compute-1 sudo[80831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:22 compute-1 podman[80870]: 2025-11-25 09:33:22.42442934 +0000 UTC m=+0.023849528 container create 166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:22 compute-1 systemd[1]: Started libpod-conmon-166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26.scope.
Nov 25 09:33:22 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:33:22 compute-1 podman[80870]: 2025-11-25 09:33:22.467080837 +0000 UTC m=+0.066501025 container init 166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:22 compute-1 podman[80870]: 2025-11-25 09:33:22.47089687 +0000 UTC m=+0.070317059 container start 166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 09:33:22 compute-1 podman[80870]: 2025-11-25 09:33:22.471959042 +0000 UTC m=+0.071379230 container attach 166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Nov 25 09:33:22 compute-1 nervous_shockley[80884]: 167 167
Nov 25 09:33:22 compute-1 systemd[1]: libpod-166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26.scope: Deactivated successfully.
Nov 25 09:33:22 compute-1 podman[80870]: 2025-11-25 09:33:22.473190143 +0000 UTC m=+0.072610330 container died 166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:33:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-72f42b1081b8b1a04c3d7307a3719f9e2dad2beba0804f8001937f45d52a24fd-merged.mount: Deactivated successfully.
Nov 25 09:33:22 compute-1 podman[80870]: 2025-11-25 09:33:22.487914107 +0000 UTC m=+0.087334285 container remove 166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:22 compute-1 podman[80870]: 2025-11-25 09:33:22.415014182 +0000 UTC m=+0.014434389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:22 compute-1 systemd[1]: libpod-conmon-166af4d32a6fabb0a7d455b53dde4926d33352bf3b3b6d7c672b135b6b70ed26.scope: Deactivated successfully.
Nov 25 09:33:22 compute-1 sudo[80831]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:22 compute-1 sudo[80906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:22 compute-1 sudo[80906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:22 compute-1 sudo[80906]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:22 compute-1 sudo[80931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:22 compute-1 sudo[80931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:22 compute-1 podman[80969]: 2025-11-25 09:33:22.887396135 +0000 UTC m=+0.023114442 container create eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_yalow, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:22 compute-1 systemd[1]: Started libpod-conmon-eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a.scope.
Nov 25 09:33:22 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:33:22 compute-1 podman[80969]: 2025-11-25 09:33:22.93857208 +0000 UTC m=+0.074290406 container init eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:33:22 compute-1 podman[80969]: 2025-11-25 09:33:22.94229577 +0000 UTC m=+0.078014086 container start eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_yalow, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Nov 25 09:33:22 compute-1 podman[80969]: 2025-11-25 09:33:22.943307527 +0000 UTC m=+0.079025843 container attach eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_yalow, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:22 compute-1 serene_yalow[80982]: 167 167
Nov 25 09:33:22 compute-1 systemd[1]: libpod-eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a.scope: Deactivated successfully.
Nov 25 09:33:22 compute-1 podman[80969]: 2025-11-25 09:33:22.944880482 +0000 UTC m=+0.080598798 container died eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_yalow, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:33:22 compute-1 podman[80969]: 2025-11-25 09:33:22.960281162 +0000 UTC m=+0.095999478 container remove eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:33:22 compute-1 podman[80969]: 2025-11-25 09:33:22.877972852 +0000 UTC m=+0.013691188 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:22 compute-1 systemd[1]: libpod-conmon-eb244effde0bff102c6da65b2dadd2d977b3918fd771a9f699291fa3cd66d15a.scope: Deactivated successfully.
Nov 25 09:33:22 compute-1 sudo[80931]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-229365fec8eba1f032c4821f1f42fa7e4d9420f68be05905db3eb0637f853112-merged.mount: Deactivated successfully.
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: Reconfiguring osd.0 (monmap changed)...
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:23 compute-1 ceph-mon[79643]: Reconfiguring daemon osd.0 on compute-1
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:23 compute-1 ceph-mon[79643]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:23 compute-1 ceph-mon[79643]: Saving service node-exporter spec with placement *
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: Saving service grafana spec with placement compute-0;count:1
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: Saving service prometheus spec with placement compute-0;count:1
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: Saving service alertmanager spec with placement compute-0;count:1
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: pgmap v74: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:23 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:23 compute-1 sudo[80997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:23 compute-1 sudo[80997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:23 compute-1 sudo[80997]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:23 compute-1 sudo[81022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:33:23 compute-1 sudo[81022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:24 compute-1 podman[81104]: 2025-11-25 09:33:24.322498065 +0000 UTC m=+0.033713683 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:24 compute-1 podman[81104]: 2025-11-25 09:33:24.402624469 +0000 UTC m=+0.113840087 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:33:24 compute-1 ceph-mon[79643]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 25 09:33:24 compute-1 ceph-mon[79643]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-1 ceph-mon[79643]: Reconfiguring mgr.compute-2.flybft (monmap changed)...
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.flybft", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:24 compute-1 ceph-mon[79643]: Reconfiguring daemon mgr.compute-2.flybft on compute-2
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3510657336' entity='client.admin' 
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2358325226' entity='client.admin' 
Nov 25 09:33:24 compute-1 sudo[81022]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:25 compute-1 ceph-mon[79643]: pgmap v75: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3373333714' entity='client.admin' 
Nov 25 09:33:26 compute-1 sudo[81198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acszxiqngaacnggjxfsghkhdzyilvocr ; /usr/bin/python3'
Nov 25 09:33:26 compute-1 sudo[81198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:26 compute-1 python3[81200]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:26 compute-1 sudo[81198]: pam_unix(sudo:session): session closed for user root
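Here Zuul's Ansible run shells out to podman to collect the full command line of every mgr container, running or exited; the braces in the --format template are shell-escaped in the journal but reach podman as {{.Command}}. The same query from Python, mirroring the _raw_params recorded above:

    import subprocess

    def mgr_container_commands() -> list[str]:
        """List the Command of all ceph-*-mgr* containers, as the Ansible
        ad-hoc task above does."""
        out = subprocess.run(
            ["podman", "ps", "-a",
             "-f", "name=ceph-?(.*)-mgr.*",
             "--format", "{{.Command}}", "--no-trunc"],
            check=True, capture_output=True, text=True).stdout
        return [line for line in out.splitlines() if line]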
Nov 25 09:33:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3012149904' entity='client.admin' 
Nov 25 09:33:27 compute-1 ceph-mon[79643]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2675069756' entity='client.admin' 
Nov 25 09:33:28 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:28 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:28 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:28 compute-1 ceph-mon[79643]: Deploying daemon rgw.rgw.compute-2.oidoiv on compute-2
Nov 25 09:33:28 compute-1 sudo[81210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:28 compute-1 sudo[81210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:28 compute-1 sudo[81210]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:29 compute-1 sudo[81235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:29 compute-1 sudo[81235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:29 compute-1 podman[81293]: 2025-11-25 09:33:29.308277782 +0000 UTC m=+0.027271944 container create b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:29 compute-1 systemd[1]: Started libpod-conmon-b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a.scope.
Nov 25 09:33:29 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:33:29 compute-1 podman[81293]: 2025-11-25 09:33:29.358941805 +0000 UTC m=+0.077935977 container init b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:33:29 compute-1 podman[81293]: 2025-11-25 09:33:29.363082008 +0000 UTC m=+0.082076170 container start b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:29 compute-1 podman[81293]: 2025-11-25 09:33:29.364285325 +0000 UTC m=+0.083279487 container attach b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:33:29 compute-1 quizzical_villani[81305]: 167 167
Nov 25 09:33:29 compute-1 systemd[1]: libpod-b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a.scope: Deactivated successfully.
Nov 25 09:33:29 compute-1 podman[81293]: 2025-11-25 09:33:29.366763755 +0000 UTC m=+0.085757917 container died b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Nov 25 09:33:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-7f6f1e8742f6b8b2771bdeeb49069fa41b5cb90a11e24cdbd14ebd350ecedd24-merged.mount: Deactivated successfully.
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1272850759' entity='client.admin' 
Nov 25 09:33:29 compute-1 ceph-mon[79643]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4100665242' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 25 09:33:29 compute-1 podman[81293]: 2025-11-25 09:33:29.297852235 +0000 UTC m=+0.016846398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:29 compute-1 podman[81293]: 2025-11-25 09:33:29.395709913 +0000 UTC m=+0.114704075 container remove b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:33:29 compute-1 systemd[1]: libpod-conmon-b43412ee35018d04c74aa7bfec9080bc9ebc07cc91424084407a302fa43cae6a.scope: Deactivated successfully.
Nov 25 09:33:29 compute-1 systemd[1]: Reloading.
Nov 25 09:33:29 compute-1 systemd-sysv-generator[81350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:29 compute-1 systemd-rc-local-generator[81345]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:29 compute-1 systemd[1]: Reloading.
Nov 25 09:33:29 compute-1 systemd-rc-local-generator[81382]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:29 compute-1 systemd-sysv-generator[81387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:29 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.lyczeh for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:33:29 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e27 e27: 3 total, 3 up, 3 in
Nov 25 09:33:30 compute-1 podman[81434]: 2025-11-25 09:33:30.00422016 +0000 UTC m=+0.025988315 container create 811e4dee2065d724f4020d164443501c112e72af369dbb129babc21e62ea8fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-1-lyczeh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68180593a721d961a52f91d9a06ca7c84496ed3b744558b8d04c29530687f924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68180593a721d961a52f91d9a06ca7c84496ed3b744558b8d04c29530687f924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68180593a721d961a52f91d9a06ca7c84496ed3b744558b8d04c29530687f924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68180593a721d961a52f91d9a06ca7c84496ed3b744558b8d04c29530687f924/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.lyczeh supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:30 compute-1 podman[81434]: 2025-11-25 09:33:30.046796663 +0000 UTC m=+0.068564819 container init 811e4dee2065d724f4020d164443501c112e72af369dbb129babc21e62ea8fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-1-lyczeh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:30 compute-1 podman[81434]: 2025-11-25 09:33:30.050771694 +0000 UTC m=+0.072539849 container start 811e4dee2065d724f4020d164443501c112e72af369dbb129babc21e62ea8fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-1-lyczeh, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:30 compute-1 bash[81434]: 811e4dee2065d724f4020d164443501c112e72af369dbb129babc21e62ea8fea
Nov 25 09:33:30 compute-1 podman[81434]: 2025-11-25 09:33:29.993451928 +0000 UTC m=+0.015220103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:30 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.lyczeh for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:33:30 compute-1 sudo[81235]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:30 compute-1 radosgw[81450]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:33:30 compute-1 radosgw[81450]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 25 09:33:30 compute-1 radosgw[81450]: framework: beast
Nov 25 09:33:30 compute-1 radosgw[81450]: framework conf key: endpoint, val: 192.168.122.101:8082
Nov 25 09:33:30 compute-1 radosgw[81450]: init_numa not setting numa affinity
Nov 25 09:33:30 compute-1 ceph-mon[79643]: Deploying daemon rgw.rgw.compute-1.lyczeh on compute-1
Nov 25 09:33:30 compute-1 ceph-mon[79643]: osdmap e27: 3 total, 3 up, 3 in
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/90661545' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4100665242' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 25 09:33:30 compute-1 ceph-mon[79643]: mgrmap e11: compute-0.zcfgby(active, since 117s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:30 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e28 e28: 3 total, 3 up, 3 in
Nov 25 09:33:30 compute-1 radosgw[81450]: rgw main: failed to create zone with (17) File exists
Nov 25 09:33:30 compute-1 radosgw[81450]: rgw main: failed to create zonegroup with (17) File exists
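
The two "(17) File exists" messages above are the expected outcome of several radosgw instances racing to create the default zone and zonegroup at startup: the losers hit EEXIST, which is benign. A sketch of the idempotent-create pattern, with create_fn as a hypothetical stand-in rather than a real librgw call:

    # Sketch: treat EEXIST as "another daemon already created it".
    import errno

    def ensure_created(create_fn, name):
        try:
            create_fn(name)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise  # only "already exists" is benign here
            # a peer daemon won the race; nothing further to do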
Nov 25 09:33:31 compute-1 ceph-mon[79643]: Deploying daemon rgw.rgw.compute-0.uosdwi on compute-0
Nov 25 09:33:31 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1685845904' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 25 09:33:31 compute-1 ceph-mon[79643]: pgmap v79: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:31 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 25 09:33:31 compute-1 ceph-mon[79643]: osdmap e28: 3 total, 3 up, 3 in
Nov 25 09:33:31 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-1 ceph-mon[79643]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  1: '-n'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  2: 'mgr.compute-1.plffrn'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  3: '-f'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  4: '--setuser'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  5: 'ceph'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  6: '--setgroup'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  7: 'ceph'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  8: '--default-log-to-file=false'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  9: '--default-log-to-journald=true'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 25 09:33:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: ignoring --setuser ceph since I am not root
Nov 25 09:33:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: ignoring --setgroup ceph since I am not root
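
The respawn lines above follow the dashboard module being enabled at 09:33:31: the standby mgr dumps its argv and re-executes its own binary so the changed module set loads in a fresh process. A sketch of that re-exec pattern, with the argv taken verbatim from the log:

    # Sketch: re-exec the current binary with the original argv, as the
    # "mgr respawn" lines above record.
    import os

    def respawn():
        argv = ['/usr/bin/ceph-mgr', '-n', 'mgr.compute-1.plffrn', '-f',
                '--setuser', 'ceph', '--setgroup', 'ceph',
                '--default-log-to-file=false',
                '--default-log-to-journald=true',
                '--default-log-to-stderr=false']
        os.execv(argv[0], argv)  # does not return on success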
Nov 25 09:33:31 compute-1 sshd-session[72797]: Connection closed by 192.168.122.100 port 48194
Nov 25 09:33:31 compute-1 sshd-session[72741]: Connection closed by 192.168.122.100 port 48176
Nov 25 09:33:31 compute-1 sshd-session[72654]: Connection closed by 192.168.122.100 port 48144
Nov 25 09:33:31 compute-1 sshd-session[72625]: Connection closed by 192.168.122.100 port 48128
Nov 25 09:33:31 compute-1 sshd-session[72768]: Connection closed by 192.168.122.100 port 48184
Nov 25 09:33:31 compute-1 sshd-session[72538]: Connection closed by 192.168.122.100 port 45536
Nov 25 09:33:31 compute-1 sshd-session[72596]: Connection closed by 192.168.122.100 port 48118
Nov 25 09:33:31 compute-1 sshd-session[72567]: Connection closed by 192.168.122.100 port 45550
Nov 25 09:33:31 compute-1 sshd-session[72683]: Connection closed by 192.168.122.100 port 48146
Nov 25 09:33:31 compute-1 sshd-session[72712]: Connection closed by 192.168.122.100 port 48160
Nov 25 09:33:31 compute-1 sshd-session[72509]: Connection closed by 192.168.122.100 port 45534
Nov 25 09:33:31 compute-1 sshd-session[72507]: Connection closed by 192.168.122.100 port 45532
Nov 25 09:33:31 compute-1 sshd-session[72794]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 sshd-session[72485]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd[1]: session-21.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 sshd-session[72593]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 sshd-session[72535]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 sshd-session[72503]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 sshd-session[72709]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 sshd-session[72765]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 19 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 sshd-session[72564]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 sshd-session[72651]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 sshd-session[72680]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 sshd-session[72622]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 sshd-session[72738]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 28 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd[1]: session-22.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 21 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Nov 25 09:33:31 compute-1 systemd[1]: session-31.scope: Consumed 49.880s CPU time.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 30 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 23 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 27 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: pidfile_write: ignore empty --pid-file
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 25 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 22 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 29 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 24 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 31 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Session 26 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 19.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 21.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 30.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 23.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 28.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 27.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 25.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 22.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 26.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 24.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 29.
Nov 25 09:33:31 compute-1 systemd-logind[746]: Removed session 31.
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'alerts'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:31.621+0000 7f0b7c9eb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'balancer'
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:31.691+0000 7f0b7c9eb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:31 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'cephadm'
Nov 25 09:33:31 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e29 e29: 3 total, 3 up, 3 in
Nov 25 09:33:31 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 25 09:33:31 compute-1 ceph-mon[79643]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:32 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'crash'
Nov 25 09:33:32 compute-1 ceph-mgr[79928]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:32.352+0000 7f0b7c9eb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:32 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'dashboard'
Nov 25 09:33:32 compute-1 ceph-mon[79643]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1685845904' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 25 09:33:32 compute-1 ceph-mon[79643]: mgrmap e12: compute-0.zcfgby(active, since 118s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:32 compute-1 ceph-mon[79643]: osdmap e29: 3 total, 3 up, 3 in
Nov 25 09:33:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:33:32 compute-1 ceph-mgr[79928]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:32.889+0000 7f0b7c9eb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:32 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:33:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e30 e30: 3 total, 3 up, 3 in
Nov 25 09:33:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:33:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:33:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]:   from numpy import show_config as show_numpy_config
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:33.031+0000 7f0b7c9eb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'influx'
Nov 25 09:33:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:33.093+0000 7f0b7c9eb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'insights'
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'iostat'
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:33.212+0000 7f0b7c9eb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'localpool'
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mirroring'
Nov 25 09:33:33 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'nfs'
Nov 25 09:33:33 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e31 e31: 3 total, 3 up, 3 in
Nov 25 09:33:33 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 31 pg[10.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [0] r=0 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:33 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 25 09:33:33 compute-1 ceph-mon[79643]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:33 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:33 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:33 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:33 compute-1 ceph-mon[79643]: osdmap e30: 3 total, 3 up, 3 in
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:33:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:34.060+0000 7f0b7c9eb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:33:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:34.246+0000 7f0b7c9eb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_support'
Nov 25 09:33:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:34.311+0000 7f0b7c9eb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:33:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:34.369+0000 7f0b7c9eb140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'progress'
Nov 25 09:33:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:34.437+0000 7f0b7c9eb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'prometheus'
Nov 25 09:33:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:34.498+0000 7f0b7c9eb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:33:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:34.794+0000 7f0b7c9eb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'restful'
Nov 25 09:33:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:34.878+0000 7f0b7c9eb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e32 e32: 3 total, 3 up, 3 in
Nov 25 09:33:34 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 32 pg[10.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [0] r=0 lpr=31 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:34 compute-1 ceph-mon[79643]: osdmap e31: 3 total, 3 up, 3 in
Nov 25 09:33:34 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-1 ceph-mon[79643]: osdmap e32: 3 total, 3 up, 3 in
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rgw'
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rook'
Nov 25 09:33:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:35.269+0000 7f0b7c9eb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:35.743+0000 7f0b7c9eb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'selftest'
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:33:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:35.805+0000 7f0b7c9eb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'stats'
Nov 25 09:33:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:35.874+0000 7f0b7c9eb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'status'
Nov 25 09:33:35 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e33 e33: 3 total, 3 up, 3 in
Nov 25 09:33:35 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 25 09:33:35 compute-1 ceph-mon[79643]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:36.005+0000 7f0b7c9eb140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telegraf'
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telemetry'
Nov 25 09:33:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:36.068+0000 7f0b7c9eb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:33:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:36.201+0000 7f0b7c9eb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:36.391+0000 7f0b7c9eb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'volumes'
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'zabbix'
Nov 25 09:33:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:36.619+0000 7f0b7c9eb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Nov 25 09:33:36 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 25 09:33:36 compute-1 ceph-mon[79643]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:36.685+0000 7f0b7c9eb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
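
Nearly every bundled module in the load sequence above is flagged for a missing NOTIFY_TYPES member: the mgr expects modules to declare which cluster notifications they consume. A sketch of what declaring it looks like, assuming the upstream mgr_module.py interface; this is an illustrative module, not one shipped in the cluster above:

    # Sketch: a mgr module that declares its NOTIFY_TYPES, silencing the
    # warning logged for the bundled modules.
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.pg_summary]

        def notify(self, notify_type, notify_id):
            # invoked only for the notification types declared above
            self.log.info("notification: %s", notify_type)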
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: mgr load Constructed class from module: dashboard
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: [dashboard INFO root] Starting engine...
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: ms_deliver_dispatch: unhandled message 0x55ff33631a00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 25 09:33:36 compute-1 ceph-mgr[79928]: [dashboard INFO root] Engine started...
Nov 25 09:33:36 compute-1 ceph-mon[79643]: osdmap e33: 3 total, 3 up, 3 in
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:36 compute-1 ceph-mon[79643]: Activating manager daemon compute-0.zcfgby
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-1 ceph-mon[79643]: osdmap e34: 3 total, 3 up, 3 in
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: mgrmap e13: compute-0.zcfgby(active, starting, since 0.0171463s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: Manager daemon compute-0.zcfgby is now available
Nov 25 09:33:36 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:36 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft started
Nov 25 09:33:36 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:36 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn started
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:33:36 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:33:37 compute-1 sshd-session[82081]: Accepted publickey for ceph-admin from 192.168.122.100 port 52604 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:33:37 compute-1 systemd-logind[746]: New session 32 of user ceph-admin.
Nov 25 09:33:37 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Nov 25 09:33:37 compute-1 sshd-session[82081]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:33:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:37 compute-1 sudo[82085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:37 compute-1 sudo[82085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:37 compute-1 sudo[82085]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:37 compute-1 sudo[82110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:33:37 compute-1 sudo[82110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:37 compute-1 podman[82191]: 2025-11-25 09:33:37.544972724 +0000 UTC m=+0.039763834 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:33:37 compute-1 podman[82191]: 2025-11-25 09:33:37.619709442 +0000 UTC m=+0.114500562 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:33:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Nov 25 09:33:37 compute-1 radosgw[81450]: v1 topic migration: starting v1 topic migration..
Nov 25 09:33:37 compute-1 radosgw[81450]: LDAP not started since no server URIs were provided in the configuration.
Nov 25 09:33:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-1-lyczeh[81446]: 2025-11-25T09:33:37.774+0000 7fe936516980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 25 09:33:37 compute-1 radosgw[81450]: v1 topic migration: finished v1 topic migration
Nov 25 09:33:37 compute-1 radosgw[81450]: framework: beast
Nov 25 09:33:37 compute-1 radosgw[81450]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 25 09:33:37 compute-1 radosgw[81450]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 25 09:33:37 compute-1 radosgw[81450]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 25 09:33:37 compute-1 radosgw[81450]: starting handler: beast
Nov 25 09:33:37 compute-1 radosgw[81450]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:33:37 compute-1 radosgw[81450]: mgrc service_daemon_register rgw.24152 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.lyczeh,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865372,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6af48147-6aba-44e3-91a3-565a32433f82,zone_name=default,zonegroup_id=7f877101-a613-42fa-9374-f143e99606e2,zonegroup_name=default}
Nov 25 09:33:37 compute-1 sudo[82110]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:37 compute-1 sudo[82311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:37 compute-1 sudo[82311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:37 compute-1 sudo[82311]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:37 compute-1 sudo[82336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:33:37 compute-1 sudo[82336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:38 compute-1 sudo[82336]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:38 compute-1 sudo[82390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:38 compute-1 sudo[82390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:38 compute-1 sudo[82390]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:38 compute-1 sudo[82415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 09:33:38 compute-1 sudo[82415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:38 compute-1 ceph-mon[79643]: mgrmap e14: compute-0.zcfgby(active, since 1.02396s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:38 compute-1 ceph-mon[79643]: osdmap e35: 3 total, 3 up, 3 in
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:38] ENGINE Bus STARTING
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:38] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:38] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:33:38 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:38] ENGINE Bus STARTED
Nov 25 09:33:38 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:38] ENGINE Client ('192.168.122.100', 57652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='client.14454 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:38 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-1 sudo[82415]: pam_unix(sudo:session): session closed for user root
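
The sudo sessions above show the orchestrator's host-check loop: it connects as ceph-admin, locates python3, then runs the staged cephadm script under sudo with subcommands like ls, gather-facts, and list-networks. A rough sketch using the plain ssh client via subprocess as an approximation of cephadm's own connection handling; host, user, and the staged script path are taken from the log:

    # Sketch: run a cephadm check on a remote host the way the sudo lines
    # above record it (ssh as ceph-admin, sudo python3 <staged cephadm>).
    import subprocess

    CEPHADM = ('/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/'
               'cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36')

    def run_check(host, *args):
        cmd = ['ssh', f'ceph-admin@{host}', 'sudo', '/bin/python3',
               CEPHADM, '--timeout', '895', *args]
        return subprocess.run(cmd, capture_output=True,
                              text=True, check=True).stdout

    # e.g. run_check('compute-1', 'gather-facts')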
Nov 25 09:33:39 compute-1 sudo[82456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:39 compute-1 sudo[82456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82456]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:39 compute-1 sudo[82481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82481]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:39 compute-1 sudo[82506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82506]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:39 compute-1 sudo[82531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82531]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:39 compute-1 sudo[82556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82556]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:39 compute-1 sudo[82604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82604]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:39 compute-1 sudo[82629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82629]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:33:39 compute-1 sudo[82654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82654]: pam_unix(sudo:session): session closed for user root
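
The mkdir/touch/chown/chmod/mv sequence above is a staged config install: the new ceph.conf is written under /tmp, given root ownership and mode 644, then moved over the destination so readers never observe a partial file. A sketch of the same pattern, with paths mirroring the log and placeholder content:

    # Sketch: stage the file, fix owner/mode, then rename into place.
    import os

    TMP = ('/tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/'
           'etc/ceph/ceph.conf.new')
    DST = '/etc/ceph/ceph.conf'

    def install_conf(content: str):
        os.makedirs(os.path.dirname(TMP), exist_ok=True)
        with open(TMP, 'w') as f:
            f.write(content)
        os.chown(TMP, 0, 0)    # root:root, as in the chown -R 0:0 step
        os.chmod(TMP, 0o644)   # as in the chmod 644 steps
        # rename is atomic only within one filesystem; across /tmp and /etc
        # a copy-then-rename (what mv does) is needed instead.
        os.replace(TMP, DST)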
Nov 25 09:33:39 compute-1 sudo[82679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:39 compute-1 sudo[82679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82679]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:39 compute-1 sudo[82704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82704]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 ceph-mon[79643]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:39 compute-1 ceph-mon[79643]: Cluster is now healthy
Nov 25 09:33:39 compute-1 ceph-mon[79643]: pgmap v5: 11 pgs: 11 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:39 compute-1 ceph-mon[79643]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-1 ceph-mon[79643]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.conf
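
The mon audit lines above show the mgr dropping the per-host osd_memory_target overrides and regenerating the minimal config it is about to push to each host's /etc/ceph/ceph.conf. The equivalent commands, issued through the ceph CLI from Python as a sketch (assumes a reachable cluster and client.admin credentials):

    import subprocess

    def ceph(*args: str) -> str:
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # Drop the per-host memory overrides, as in the "config rm" dispatches above:
    for host in ("compute-0", "compute-1", "compute-2"):
        ceph("config", "rm", f"osd/host:{host}", "osd_memory_target")

    # Regenerate the minimal config that cephadm distributes to every host:
    print(ceph("config", "generate-minimal-conf"))
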
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:39 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-1 ceph-mon[79643]: mgrmap e15: compute-0.zcfgby(active, since 2s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:39 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-1 ceph-mon[79643]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-1 ceph-mon[79643]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-1 sudo[82729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:39 compute-1 sudo[82729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82729]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:39 compute-1 sudo[82754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82754]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:39 compute-1 sudo[82779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82779]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:39 compute-1 sudo[82827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82827]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:39 compute-1 sudo[82852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82852]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-1 sudo[82877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-1 sudo[82877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-1 sudo[82877]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[82902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:40 compute-1 sudo[82902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[82902]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[82927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:40 compute-1 sudo[82927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[82927]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[82952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-1 sudo[82952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[82952]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[82977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:40 compute-1 sudo[82977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[82977]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-1 sudo[83002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83002]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-1 sudo[83050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83050]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-1 sudo[83075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83075]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:40 compute-1 sudo[83100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83100]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:40 compute-1 sudo[83125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83125]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:40 compute-1 sudo[83150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83150]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-1 sudo[83175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83175]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:40 compute-1 sudo[83200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83200]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-1 sudo[83225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83225]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-1 sudo[83273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83273]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-1 sudo[83298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83298]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 ceph-mon[79643]: from='client.14472 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:40 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-1 ceph-mon[79643]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:40 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:40 compute-1 ceph-mon[79643]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:40 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-1 ceph-mon[79643]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-1 ceph-mon[79643]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-1 ceph-mon[79643]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
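
This dispatch and the two earlier ones (set-alertmanager-api-host at 09:33:39, set-prometheus-api-host just above) wire the dashboard module to the monitoring stack endpoints cephadm deployed on 192.168.122.100. Re-running them by hand would look like the sketch below; the ports are the ones in this log (9093 Alertmanager, 9092 Prometheus, 3100 Grafana), not defaults to rely on:

    import subprocess

    # Point the dashboard at the monitoring endpoints seen in the audit log.
    endpoints = [
        ("dashboard", "set-alertmanager-api-host", "http://192.168.122.100:9093"),
        ("dashboard", "set-prometheus-api-host", "http://192.168.122.100:9092"),
        ("dashboard", "set-grafana-api-url", "http://192.168.122.100:3100"),
    ]
    for cmd in endpoints:
        subprocess.run(["ceph", *cmd], check=True)
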
Nov 25 09:33:40 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-1 sudo[83323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-1 sudo[83323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83323]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:40 compute-1 sudo[83348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-1 sudo[83348]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-1 sudo[83373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:40 compute-1 sudo[83373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
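
This is the orchestrator deploying node-exporter: the mgr copies the cephadm binary to /var/lib/ceph/<fsid>/cephadm.<sha256-of-the-file> and runs it as root against the target image; the service spec itself is passed separately by the mgr (on stdin in recent cephadm versions), which is why it does not appear in the COMMAND= audit line. Reproducing the invocation exactly as logged, as a sketch; `_orch deploy` is what writes the systemd unit and podman wrapper that start a few lines later:

    import subprocess

    fsid = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
    binary = (f"/var/lib/ceph/{fsid}/cephadm."
              "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    # All arguments below come straight from the COMMAND= line above.
    subprocess.run(
        ["sudo", "/bin/python3", binary,
         "--image", "quay.io/prometheus/node-exporter:v1.7.0",
         "--timeout", "895",
         "_orch", "deploy", "--fsid", fsid],
        check=True,
    )
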
Nov 25 09:33:41 compute-1 systemd[1]: Reloading.
Nov 25 09:33:41 compute-1 systemd-rc-local-generator[83457]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:41 compute-1 systemd-sysv-generator[83462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:41 compute-1 systemd[1]: Reloading.
Nov 25 09:33:41 compute-1 sshd-session[82084]: Connection closed by 192.168.122.100 port 52604
Nov 25 09:33:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: ignoring --setuser ceph since I am not root
Nov 25 09:33:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: ignoring --setgroup ceph since I am not root
Nov 25 09:33:41 compute-1 systemd-rc-local-generator[83492]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:41 compute-1 ceph-mgr[79928]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:33:41 compute-1 ceph-mgr[79928]: pidfile_write: ignore empty --pid-file
Nov 25 09:33:41 compute-1 systemd-sysv-generator[83495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'alerts'
Nov 25 09:33:41 compute-1 ceph-mgr[79928]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'balancer'
Nov 25 09:33:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:41.683+0000 7f5793535140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:41 compute-1 sshd-session[82081]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:41 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:33:41 compute-1 systemd-logind[746]: Session 32 logged out. Waiting for processes to exit.
Nov 25 09:33:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:41.755+0000 7f5793535140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:41 compute-1 ceph-mgr[79928]: mgr[py] Module balancer has missing NOTIFY_TYPES member
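
The recurring "Module X has missing NOTIFY_TYPES member" lines here are warnings, not failures: mgr modules are expected to declare which cluster notifications they consume in a NOTIFY_TYPES class attribute so the mgr can skip notify() calls the module ignores, and in-tree modules that never declared one are logged once per load. A minimal sketch of a module that declares it, assuming the in-tree mgr_module Python API (only importable inside ceph-mgr; such a module would live under /usr/share/ceph/mgr/<name>/module.py):

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES silences the warning and tells the mgr
        # which notify() events to deliver to this module.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("got notification: %s", notify_type)
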
Nov 25 09:33:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'cephadm'
Nov 25 09:33:41 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:41 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:41 compute-1 ceph-mon[79643]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:41 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/321985415' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 25 09:33:41 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/321985415' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 25 09:33:41 compute-1 ceph-mon[79643]: mgrmap e16: compute-0.zcfgby(active, since 4s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:41 compute-1 bash[83566]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 25 09:33:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'crash'
Nov 25 09:33:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:42.455+0000 7f5793535140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:42 compute-1 ceph-mgr[79928]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'dashboard'
Nov 25 09:33:42 compute-1 bash[83566]: Getting image source signatures
Nov 25 09:33:42 compute-1 bash[83566]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 25 09:33:42 compute-1 bash[83566]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 25 09:33:42 compute-1 bash[83566]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 25 09:33:42 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2461625104' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 25 09:33:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:43.014+0000 7f5793535140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]:   from numpy import show_config as show_numpy_config
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:43.156+0000 7f5793535140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'influx'
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:43.220+0000 7f5793535140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'insights'
Nov 25 09:33:43 compute-1 bash[83566]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 25 09:33:43 compute-1 bash[83566]: Writing manifest to image destination
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'iostat'
Nov 25 09:33:43 compute-1 podman[83566]: 2025-11-25 09:33:43.286549094 +0000 UTC m=+1.407090732 container create 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:43 compute-1 podman[83566]: 2025-11-25 09:33:43.273308082 +0000 UTC m=+1.393849730 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 25 09:33:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/218e690cf1be2d09d985eece31397d812ffc2b57e35c9d3c89e935cd76f00418/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:43 compute-1 podman[83566]: 2025-11-25 09:33:43.326638521 +0000 UTC m=+1.447180169 container init 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:43 compute-1 podman[83566]: 2025-11-25 09:33:43.33093129 +0000 UTC m=+1.451472918 container start 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:43 compute-1 bash[83566]: 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3
Nov 25 09:33:43 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.340Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.342Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.343Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.343Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.343Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.344Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.344Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.344Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.344Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.344Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:43.343+0000 7f5793535140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=dmi
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=entropy
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.345Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.346Z caller=node_exporter.go:117 level=info collector=os
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=pressure
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=rapl
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=selinux
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=stat
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=textfile
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=time
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.347Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.362Z caller=node_exporter.go:117 level=info collector=uname
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.362Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.362Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.362Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.363Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 25 09:33:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[83640]: ts=2025-11-25T09:33:43.363Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
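
With the unit started, node-exporter serves plaintext metrics on port 9100 on every host (the line above confirms TLS is disabled, so plain HTTP works). A quick probe from anywhere that can reach the host, using the IP seen elsewhere in this log; each collector enabled in the startup listing above contributes its own metric families:

    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9100/metrics", timeout=5) as resp:
        body = resp.read().decode()

    # e.g. the 'uname' collector from the list above produces node_uname_info:
    print([line for line in body.splitlines() if line.startswith("node_uname_info")])
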
Nov 25 09:33:43 compute-1 sudo[83373]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:43 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Nov 25 09:33:43 compute-1 systemd[1]: session-32.scope: Consumed 3.837s CPU time.
Nov 25 09:33:43 compute-1 systemd-logind[746]: Removed session 32.
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'localpool'
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:33:43 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2461625104' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 25 09:33:43 compute-1 ceph-mon[79643]: mgrmap e17: compute-0.zcfgby(active, since 6s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mirroring'
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'nfs'
Nov 25 09:33:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:44.216+0000 7f5793535140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:33:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:44.403+0000 7f5793535140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:33:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:44.477+0000 7f5793535140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_support'
Nov 25 09:33:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:44.538+0000 7f5793535140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:33:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:44.608+0000 7f5793535140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'progress'
Nov 25 09:33:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:44.672+0000 7f5793535140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'prometheus'
Nov 25 09:33:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:44.980+0000 7f5793535140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:33:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:45.065+0000 7f5793535140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'restful'
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rgw'
Nov 25 09:33:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:45.437+0000 7f5793535140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rook'
Nov 25 09:33:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:45.915+0000 7f5793535140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'selftest'
Nov 25 09:33:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:45.976+0000 7f5793535140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:46.046+0000 7f5793535140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'stats'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'status'
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:46.173+0000 7f5793535140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telegraf'
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:46.233+0000 7f5793535140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telemetry'
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:46.364+0000 7f5793535140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:46.552+0000 7f5793535140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'volumes'
Nov 25 09:33:46 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Nov 25 09:33:46 compute-1 ceph-mon[79643]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:46 compute-1 ceph-mon[79643]: Activating manager daemon compute-0.zcfgby
Nov 25 09:33:46 compute-1 ceph-mon[79643]: osdmap e36: 3 total, 3 up, 3 in
Nov 25 09:33:46 compute-1 ceph-mon[79643]: mgrmap e18: compute-0.zcfgby(active, starting, since 0.0220563s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:46.781+0000 7f5793535140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'zabbix'
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:46.844+0000 7f5793535140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: ms_deliver_dispatch: unhandled message 0x55bec26b7a00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  1: '-n'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  2: 'mgr.compute-1.plffrn'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  3: '-f'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  4: '--setuser'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  5: 'ceph'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  6: '--setgroup'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  7: 'ceph'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  8: '--default-log-to-file=false'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  9: '--default-log-to-journald=true'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr respawn  exe_path /proc/self/exe
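
Enabling the dashboard changed the set of modules in the mgrmap, and ceph-mgr reacts by re-exec'ing itself with its saved argv (items 0 through 10 above), resolving the binary through /proc/self/exe so the respawn still works if /usr/bin/ceph-mgr was replaced on disk since startup. The same pattern in miniature, as a Python sketch (Linux-only, since it relies on /proc):

    import os
    import sys

    def respawn() -> None:
        """Re-exec the current process with its original argv, like ceph-mgr."""
        argv = [sys.executable, *sys.argv]  # ceph-mgr replays its saved C argv instead
        # /proc/self/exe always names the running binary, even if the path it
        # was started from has since been upgraded or unlinked.
        os.execv("/proc/self/exe", argv)
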
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: ignoring --setuser ceph since I am not root
Nov 25 09:33:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: ignoring --setgroup ceph since I am not root
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: pidfile_write: ignore empty --pid-file
Nov 25 09:33:46 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'alerts'
Nov 25 09:33:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:47.022+0000 7fb4c26d0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:47 compute-1 ceph-mgr[79928]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:47 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'balancer'
Nov 25 09:33:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:47.092+0000 7fb4c26d0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:47 compute-1 ceph-mgr[79928]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:47 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'cephadm'
Nov 25 09:33:47 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'crash'
Nov 25 09:33:47 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:47 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft started
Nov 25 09:33:47 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:47 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn started
Nov 25 09:33:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:47.766+0000 7fb4c26d0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:47 compute-1 ceph-mgr[79928]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:47 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'dashboard'
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:33:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:48.300+0000 7fb4c26d0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:33:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:33:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:33:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]:   from numpy import show_config as show_numpy_config
Nov 25 09:33:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:48.439+0000 7fb4c26d0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'influx'
Nov 25 09:33:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:48.500+0000 7fb4c26d0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'insights'
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'iostat'
Nov 25 09:33:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:48.617+0000 7fb4c26d0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:33:48 compute-1 ceph-mon[79643]: mgrmap e19: compute-0.zcfgby(active, starting, since 1.07823s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'localpool'
Nov 25 09:33:48 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mirroring'
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'nfs'
Nov 25 09:33:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:49.452+0000 7fb4c26d0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:33:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:49.636+0000 7fb4c26d0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:33:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:49.702+0000 7fb4c26d0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_support'
Nov 25 09:33:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:49.759+0000 7fb4c26d0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:33:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:49.826+0000 7fb4c26d0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'progress'
Nov 25 09:33:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:49.887+0000 7fb4c26d0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'prometheus'
Nov 25 09:33:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:50.181+0000 7fb4c26d0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-1 ceph-mgr[79928]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:33:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:50.263+0000 7fb4c26d0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-1 ceph-mgr[79928]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'restful'
Nov 25 09:33:50 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rgw'
Nov 25 09:33:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:50.629+0000 7fb4c26d0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-1 ceph-mgr[79928]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rook'
Nov 25 09:33:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:51.107+0000 7fb4c26d0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'selftest'
Nov 25 09:33:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:51.169+0000 7fb4c26d0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:33:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:51.238+0000 7fb4c26d0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'stats'
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'status'
Nov 25 09:33:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:51.366+0000 7fb4c26d0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telegraf'
Nov 25 09:33:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:51.427+0000 7fb4c26d0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telemetry'
Nov 25 09:33:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:51.558+0000 7fb4c26d0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:33:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:51.747+0000 7fb4c26d0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'volumes'
Nov 25 09:33:51 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Nov 25 09:33:51 compute-1 ceph-mon[79643]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:51 compute-1 ceph-mon[79643]: Activating manager daemon compute-0.zcfgby
Nov 25 09:33:51 compute-1 ceph-mon[79643]: osdmap e37: 3 total, 3 up, 3 in
Nov 25 09:33:51 compute-1 ceph-mon[79643]: mgrmap e20: compute-0.zcfgby(active, starting, since 0.0170664s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:33:51 compute-1 ceph-mon[79643]: Manager daemon compute-0.zcfgby is now available
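Each cmd=[{...}] entry in the audit trail above is a JSON-encoded mon command; the CLI and the mgr submit the same payloads through librados. A minimal sketch, assuming python3-rados and readable client.admin credentials on this host (output handling is illustrative):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same payload as the audited "mon metadata" dispatch above.
        cmd = json.dumps({"prefix": "mon metadata", "id": "compute-0"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode() or errs)
    finally:
        cluster.shutdown()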
Nov 25 09:33:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:51.976+0000 7fb4c26d0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'zabbix'
Nov 25 09:33:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:33:52.037+0000 7fb4c26d0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:52 compute-1 ceph-mgr[79928]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
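The recurring "missing NOTIFY_TYPES member" lines are warnings, not load failures: in recent Ceph releases the mgr expects each Python module to declare which cluster notifications it consumes, and modules without the attribute still load. A minimal sketch of a module that declares it, assuming the in-tree mgr_module API (class name is illustrative):

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES silences the warning and lets the mgr
        # route only these notification types to notify().
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.debug("osdmap notification: %s", notify_id)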
Nov 25 09:33:52 compute-1 ceph-mgr[79928]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:52 compute-1 ceph-mgr[79928]: mgr load Constructed class from module: dashboard
Nov 25 09:33:52 compute-1 ceph-mgr[79928]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 25 09:33:52 compute-1 ceph-mgr[79928]: ms_deliver_dispatch: unhandled message 0x557a448a7860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 25 09:33:52 compute-1 ceph-mgr[79928]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 09:33:52 compute-1 ceph-mgr[79928]: [dashboard INFO root] Starting engine...
Nov 25 09:33:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:52 compute-1 ceph-mgr[79928]: [dashboard INFO root] Engine started...
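ssl=no on 192.168.122.101:8443 means this dashboard instance came up without TLS; the bind address, port, and SSL toggle are ordinary mgr config keys. A sketch of how they are typically set, assuming the documented mgr/dashboard key names:

    import subprocess

    def dashboard_set(key: str, value: str) -> None:
        subprocess.run(["ceph", "config", "set", "mgr", key, value], check=True)

    dashboard_set("mgr/dashboard/ssl", "false")  # matches ssl=no above
    dashboard_set("mgr/dashboard/server_addr", "192.168.122.101")
    dashboard_set("mgr/dashboard/server_port", "8443")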
Nov 25 09:33:52 compute-1 sshd-session[83693]: Accepted publickey for ceph-admin from 192.168.122.100 port 50276 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:33:52 compute-1 systemd-logind[746]: New session 33 of user ceph-admin.
Nov 25 09:33:52 compute-1 systemd[1]: Started Session 33 of User ceph-admin.
Nov 25 09:33:52 compute-1 sshd-session[83693]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:33:52 compute-1 sudo[83697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:52 compute-1 sudo[83697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:52 compute-1 sudo[83697]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:52 compute-1 sudo[83722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:33:52 compute-1 sudo[83722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
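The SSH sessions from 192.168.122.100 are the cephadm orchestrator's refresh loop: log in as ceph-admin, then sudo the host's cached cephadm binary and parse its JSON output. A sketch of the `ls` inventory call it just ran, with the path copied from the COMMAND line above (JSON field names are illustrative):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    out = subprocess.run(["sudo", "/bin/python3", CEPHADM, "ls"],
                         check=True, capture_output=True, text=True).stdout
    for daemon in json.loads(out):
        print(daemon["name"], daemon.get("state"))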
Nov 25 09:33:52 compute-1 podman[83802]: 2025-11-25 09:33:52.847659591 +0000 UTC m=+0.046369622 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e2 new map
Nov 25 09:33:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           btime 2025-11-25T09:33:52.871701+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:33:52.871685+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Nov 25 09:33:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Nov 25 09:33:52 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:33:52 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:52 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft started
Nov 25 09:33:52 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:52 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn started
Nov 25 09:33:52 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:33:52 compute-1 ceph-mon[79643]: mgrmap e21: compute-0.zcfgby(active, since 1.02783s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:52 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 09:33:52 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 09:33:52 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 09:33:52 compute-1 ceph-mon[79643]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 09:33:52 compute-1 ceph-mon[79643]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 09:33:52 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 09:33:52 compute-1 ceph-mon[79643]: osdmap e38: 3 total, 3 up, 3 in
Nov 25 09:33:52 compute-1 ceph-mon[79643]: fsmap cephfs:0
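The three dispatches above (metadata pool, bulk data pool, fs new) are the standard CephFS bootstrap sequence, and MDS_ALL_DOWN is expected at this point because no MDS daemon exists yet. The equivalent CLI calls, as a sketch:

    import subprocess

    def ceph(*args: str) -> None:
        subprocess.run(["ceph", *args], check=True)

    ceph("osd", "pool", "create", "cephfs.cephfs.meta")
    ceph("osd", "pool", "create", "cephfs.cephfs.data", "--bulk")
    # Health returns to OK once cephadm deploys an MDS for rank 0.
    ceph("fs", "new", "cephfs", "cephfs.cephfs.meta", "cephfs.cephfs.data")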
Nov 25 09:33:52 compute-1 podman[83802]: 2025-11-25 09:33:52.927688112 +0000 UTC m=+0.126398142 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:53 compute-1 podman[83911]: 2025-11-25 09:33:53.299104928 +0000 UTC m=+0.038851975 container exec 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:53 compute-1 podman[83932]: 2025-11-25 09:33:53.356504694 +0000 UTC m=+0.045351002 container exec_died 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:53 compute-1 podman[83911]: 2025-11-25 09:33:53.359212638 +0000 UTC m=+0.098959685 container exec_died 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:53 compute-1 sudo[83722]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-1 sudo[83940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:53 compute-1 sudo[83940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:53 compute-1 sudo[83940]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-1 sudo[83965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:33:53 compute-1 sudo[83965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:53 compute-1 systemd[72489]: Starting Mark boot as successful...
Nov 25 09:33:53 compute-1 systemd[72489]: Finished Mark boot as successful.
Nov 25 09:33:53 compute-1 sudo[83965]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-1 ceph-mon[79643]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:53] ENGINE Bus STARTING
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:53] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:33:53 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:53] ENGINE Client ('192.168.122.100', 59432) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:33:53 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:53] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:33:53 compute-1 ceph-mon[79643]: [25/Nov/2025:09:33:53] ENGINE Bus STARTED
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:53 compute-1 ceph-mon[79643]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:53 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-1 sudo[84020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:53 compute-1 sudo[84020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:53 compute-1 sudo[84020]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-1 sudo[84045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 09:33:53 compute-1 sudo[84045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84045]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:54 compute-1 sudo[84085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84085]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:54 compute-1 sudo[84110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84110]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:54 compute-1 sudo[84135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84135]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:54 compute-1 sudo[84160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84160]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:54 compute-1 sudo[84185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84185]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:54 compute-1 sudo[84233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84233]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:54 compute-1 sudo[84258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84258]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:33:54 compute-1 sudo[84283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84283]: pam_unix(sudo:session): session closed for user root
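The mkdir/touch/chown/chmod/mv run above is cephadm's file-staging pattern: the file is fully prepared under /tmp with final ownership and mode, then mv'd over the destination, so readers of /etc/ceph/ceph.conf never observe a half-written file. A condensed sketch of the same pattern (helper name is illustrative):

    import os
    import shutil

    def stage_and_install(data: bytes, tmp_path: str, final_path: str,
                          mode: int, uid: int = 0, gid: int = 0) -> None:
        os.makedirs(os.path.dirname(tmp_path), exist_ok=True)
        with open(tmp_path, "wb") as f:
            f.write(data)
        os.chown(tmp_path, uid, gid)  # the chown -R 0:0 step above
        os.chmod(tmp_path, mode)      # 0o644 for ceph.conf, 0o600 for keyrings
        shutil.move(tmp_path, final_path)  # the final mv into place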
Nov 25 09:33:54 compute-1 sudo[84308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:54 compute-1 sudo[84308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84308]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:54 compute-1 sudo[84333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84333]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 ceph-mon[79643]: pgmap v5: 11 pgs: 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
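The "who" field in these config rm calls carries a host mask: osd/host:compute-N scopes osd_memory_target to the OSDs on one host, which is how cephadm clears per-host autotune overrides. The CLI spelling of the same removals, as a sketch:

    import subprocess

    for host in ("compute-0", "compute-1", "compute-2"):
        # osd/host:<name> applies the option only to OSDs on that host.
        subprocess.run(["ceph", "config", "rm", f"osd/host:{host}",
                        "osd_memory_target"], check=True)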
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:54 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:54 compute-1 ceph-mon[79643]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-1 ceph-mon[79643]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-1 ceph-mon[79643]: mgrmap e22: compute-0.zcfgby(active, since 2s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:54 compute-1 sudo[84358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:54 compute-1 sudo[84358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84358]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-1 sudo[84383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:54 compute-1 sudo[84383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-1 sudo[84383]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:55 compute-1 sudo[84408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84408]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:55 compute-1 sudo[84456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84456]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:55 compute-1 sudo[84481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84481]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:55 compute-1 sudo[84506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84506]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Nov 25 09:33:55 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 39 pg[12.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [0] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:55 compute-1 sudo[84531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:55 compute-1 sudo[84531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84531]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:55 compute-1 sudo[84556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84556]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-1 sudo[84581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84581]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:55 compute-1 sudo[84606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84606]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-1 sudo[84631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84631]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-1 sudo[84679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84679]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-1 sudo[84704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84704]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:55 compute-1 sudo[84729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84729]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:55 compute-1 sudo[84754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84754]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:55 compute-1 sudo[84779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84779]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-1 sudo[84804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84804]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:55 compute-1 sudo[84829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84829]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-1 sudo[84854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84854]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-1 sudo[84902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84902]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-1 sudo[84927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84927]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-1 sudo[84952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:55 compute-1 sudo[84952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-1 sudo[84952]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 25 09:33:56 compute-1 ceph-mon[79643]: osdmap e39: 3 total, 3 up, 3 in
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:56 compute-1 ceph-mon[79643]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Nov 25 09:33:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 40 pg[12.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [0] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Nov 25 09:33:57 compute-1 ceph-mon[79643]: pgmap v7: 12 pgs: 1 unknown, 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:57 compute-1 ceph-mon[79643]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 25 09:33:57 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 25 09:33:57 compute-1 ceph-mon[79643]: osdmap e40: 3 total, 3 up, 3 in
Nov 25 09:33:57 compute-1 ceph-mon[79643]: mgrmap e23: compute-0.zcfgby(active, since 4s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:57 compute-1 ceph-mon[79643]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:57 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:57 compute-1 ceph-mon[79643]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:57 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
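The two service specs saved above (nfs.cephfs and ingress.nfs.cephfs) both come from the single "nfs cluster create" request dispatched at 09:33:54. Reconstructed as the admin CLI call with every value copied from the cmd= JSON (flag spellings per the nfs module docs; treat as a sketch):

    import subprocess

    subprocess.run([
        "ceph", "nfs", "cluster", "create", "cephfs",
        "compute-0 compute-1 compute-2",   # placement, one quoted string
        "--ingress",
        "--virtual_ip", "192.168.122.2/24",
        "--ingress_mode", "haproxy-protocol",
    ], check=True)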
Nov 25 09:33:57 compute-1 ceph-mon[79643]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:58 compute-1 ceph-mon[79643]: osdmap e41: 3 total, 3 up, 3 in
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3756297363' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3756297363' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:58 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:59 compute-1 ceph-mon[79643]: pgmap v10: 12 pgs: 1 unknown, 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:59 compute-1 ceph-mon[79643]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:59 compute-1 ceph-mon[79643]: mgrmap e24: compute-0.zcfgby(active, since 6s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3644062899' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:34:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2787207747' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:34:01 compute-1 ceph-mon[79643]: pgmap v11: 12 pgs: 12 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
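The pgmap lines are the same counters a client receives from the "status" call dispatched at 09:33:59. A sketch that pulls the fields shown in the pgmap entries, assuming the JSON field names emitted by recent releases:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    pg = status["pgmap"]
    print(pg["num_pgs"], "pgs;", pg["bytes_used"], "of",
          pg["bytes_total"], "bytes used")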
Nov 25 09:34:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1817509438' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 09:34:01 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:01 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:01 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:01 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 09:34:01 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:02 compute-1 ceph-mon[79643]: Deploying daemon mds.cephfs.compute-2.pwazzx on compute-2
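Each MDS deployment is preceded by an auth get-or-create carrying the standard MDS capability profile, visible verbatim in the audit entry above. The same request from the CLI, as a sketch:

    import subprocess

    subprocess.run([
        "ceph", "auth", "get-or-create", "mds.cephfs.compute-2.pwazzx",
        "mon", "profile mds",
        "osd", "allow rw tag cephfs *=*",
        "mds", "allow",
    ], check=True)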
Nov 25 09:34:02 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e3 new map
Nov 25 09:34:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           btime 2025-11-25T09:34:02.633817+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:02.633809+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:creating seq 1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Nov 25 09:34:03 compute-1 ceph-mon[79643]: pgmap v12: 12 pgs: 12 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Nov 25 09:34:03 compute-1 ceph-mon[79643]: from='client.14598 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:03 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:03 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 09:34:03 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:03 compute-1 ceph-mon[79643]: Deploying daemon mds.cephfs.compute-0.wjveyw on compute-0
Nov 25 09:34:03 compute-1 ceph-mon[79643]: daemon mds.cephfs.compute-2.pwazzx assigned to filesystem cephfs as rank 0 (now has 1 rank)
Nov 25 09:34:03 compute-1 ceph-mon[79643]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 09:34:03 compute-1 ceph-mon[79643]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 09:34:03 compute-1 ceph-mon[79643]: Cluster is now healthy
Nov 25 09:34:03 compute-1 ceph-mon[79643]: mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:boot
Nov 25 09:34:03 compute-1 ceph-mon[79643]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:creating}
Nov 25 09:34:03 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.pwazzx"}]: dispatch
Nov 25 09:34:03 compute-1 ceph-mon[79643]: daemon mds.cephfs.compute-2.pwazzx is now active in filesystem cephfs as rank 0
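With rank 0 active and both MDS health checks cleared, the fsmap can be confirmed from any admin host. A sketch using the fs status JSON (field names as emitted by the mgr in recent releases; standby entries may lack a rank):

    import json
    import subprocess

    st = json.loads(subprocess.run(
        ["ceph", "fs", "status", "cephfs", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    for mds in st["mdsmap"]:
        print(mds.get("rank"), mds["name"], mds["state"])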
Nov 25 09:34:03 compute-1 sudo[84977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:03 compute-1 sudo[84977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:03 compute-1 sudo[84977]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:03 compute-1 sudo[85002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:03 compute-1 sudo[85002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e4 new map
Nov 25 09:34:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           btime 2025-11-25T09:34:03.638492+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:03.638490+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 09:34:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e5 new map
Nov 25 09:34:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           btime 2025-11-25T09:34:03.644218+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:03.638490+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
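
The bracketed daemon lines in the print_map output above have a fixed shape: daemon name, {rank:gid}, state, beacon sequence, then v2/v1 addresses and a compat set. Below is a minimal Python sketch of pulling the first five fields out with a regular expression; the pattern is fitted to the lines above and is an assumption, not a stable Ceph interface. A rank of -1, as on the standby line, means the daemon holds no rank in any filesystem.

    import re

    # Matches lines such as:
    # [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [...] compat {...}]
    DAEMON_RE = re.compile(
        r"\[mds\.(?P<name>[^{\s]+)\{(?P<rank>-?\d+):(?P<gid>\d+)\}"
        r" state (?P<state>\S+) seq (?P<seq>\d+)"
    )

    def parse_daemons(print_map_text):
        """Yield (name, rank, gid, state, seq) for each daemon line."""
        for m in DAEMON_RE.finditer(print_map_text):
            yield (m["name"], int(m["rank"]), int(m["gid"]),
                   m["state"], int(m["seq"]))
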
Nov 25 09:34:03 compute-1 podman[85061]: 2025-11-25 09:34:03.788874287 +0000 UTC m=+0.026864075 container create ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:03 compute-1 systemd[1]: Started libpod-conmon-ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6.scope.
Nov 25 09:34:03 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:34:03 compute-1 podman[85061]: 2025-11-25 09:34:03.845280662 +0000 UTC m=+0.083270450 container init ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:34:03 compute-1 podman[85061]: 2025-11-25 09:34:03.849504783 +0000 UTC m=+0.087494570 container start ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:34:03 compute-1 keen_cray[85074]: 167 167
Nov 25 09:34:03 compute-1 systemd[1]: libpod-ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6.scope: Deactivated successfully.
Nov 25 09:34:03 compute-1 podman[85061]: 2025-11-25 09:34:03.852947991 +0000 UTC m=+0.090937799 container attach ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:03 compute-1 podman[85061]: 2025-11-25 09:34:03.853109665 +0000 UTC m=+0.091099453 container died ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-e1773cacf76fd28dc4bb10bf36831a4e346712f489592d32f9a11edb3dfa2964-merged.mount: Deactivated successfully.
Nov 25 09:34:03 compute-1 podman[85061]: 2025-11-25 09:34:03.870307253 +0000 UTC m=+0.108297041 container remove ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:03 compute-1 podman[85061]: 2025-11-25 09:34:03.777062127 +0000 UTC m=+0.015051925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:03 compute-1 systemd[1]: libpod-conmon-ee6506aeda1565e619c6b94bf9eb386976d85227dfb7f78ae7f13a6bf6596bd6.scope: Deactivated successfully.
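
The keen_cray create/init/start/attach/died/remove sequence above is one short-lived `podman run --rm` helper that cephadm launches against the Ceph image; its only output was "167 167", consistent with probing the uid:gid the image runs Ceph as. A hedged reconstruction of that probe; the exact command cephadm runs is an assumption:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # Hypothetical probe: ask the image which uid/gid owns /var/lib/ceph.
    out = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())  # "167 167" on this image, matching the log line
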
Nov 25 09:34:03 compute-1 systemd[1]: Reloading.
Nov 25 09:34:03 compute-1 systemd-rc-local-generator[85109]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:03 compute-1 systemd-sysv-generator[85112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:04 compute-1 systemd[1]: Reloading.
Nov 25 09:34:04 compute-1 systemd-rc-local-generator[85150]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:04 compute-1 systemd-sysv-generator[85153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:04 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.knpqas for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:04 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:04 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
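
The auth get-or-create exchange above (dispatch, then finished) is an ordinary monitor command, so any client with sufficient caps can issue the same request. A sketch using the python-rados binding, with the entity and caps copied from the log; the conffile path and an admin keyring on the local host are assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumes admin keyring
    cluster.connect()
    cmd = json.dumps({
        "prefix": "auth get-or-create",
        "entity": "mds.cephfs.compute-1.knpqas",
        "caps": ["mon", "profile mds",
                 "osd", "allow rw tag cephfs *=*",
                 "mds", "allow"],
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode())  # 0 and the (existing or new) keyring on success
    cluster.shutdown()
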
Nov 25 09:34:04 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:04 compute-1 ceph-mon[79643]: Deploying daemon mds.cephfs.compute-1.knpqas on compute-1
Nov 25 09:34:04 compute-1 ceph-mon[79643]: mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:active
Nov 25 09:34:04 compute-1 ceph-mon[79643]: mds.? [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] up:boot
Nov 25 09:34:04 compute-1 ceph-mon[79643]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 1 up:standby
Nov 25 09:34:04 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjveyw"}]: dispatch
Nov 25 09:34:04 compute-1 ceph-mon[79643]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 1 up:standby
Nov 25 09:34:04 compute-1 podman[85202]: 2025-11-25 09:34:04.457696428 +0000 UTC m=+0.027089699 container create 9d4f314109b600691c35dbe55a0cd7da0b40fdaeb234728ced65696df0c91ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-1-knpqas, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d64432441de6581d42731337131b798d76095db2b39e9de252315821cdc1faf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d64432441de6581d42731337131b798d76095db2b39e9de252315821cdc1faf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d64432441de6581d42731337131b798d76095db2b39e9de252315821cdc1faf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d64432441de6581d42731337131b798d76095db2b39e9de252315821cdc1faf/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.knpqas supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:04 compute-1 podman[85202]: 2025-11-25 09:34:04.500001119 +0000 UTC m=+0.069394420 container init 9d4f314109b600691c35dbe55a0cd7da0b40fdaeb234728ced65696df0c91ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-1-knpqas, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:04 compute-1 podman[85202]: 2025-11-25 09:34:04.503672628 +0000 UTC m=+0.073065899 container start 9d4f314109b600691c35dbe55a0cd7da0b40fdaeb234728ced65696df0c91ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-1-knpqas, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:04 compute-1 bash[85202]: 9d4f314109b600691c35dbe55a0cd7da0b40fdaeb234728ced65696df0c91ce3
Nov 25 09:34:04 compute-1 podman[85202]: 2025-11-25 09:34:04.446621548 +0000 UTC m=+0.016014839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:04 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.knpqas for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:04 compute-1 ceph-mds[85218]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:34:04 compute-1 ceph-mds[85218]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 25 09:34:04 compute-1 ceph-mds[85218]: main not setting numa affinity
Nov 25 09:34:04 compute-1 ceph-mds[85218]: pidfile_write: ignore empty --pid-file
Nov 25 09:34:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-1-knpqas[85214]: starting mds.cephfs.compute-1.knpqas at 
Nov 25 09:34:04 compute-1 sudo[85002]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:04 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Updating MDS map to version 5 from mon.2
Nov 25 09:34:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e6 new map
Nov 25 09:34:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           btime 2025-11-25T09:34:05.420267+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:03.638490+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 09:34:05 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Updating MDS map to version 6 from mon.2
Nov 25 09:34:05 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Monitors have assigned me to become a standby
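
The maps above set standby_count_wanted to 1, and after epoch 6 the cluster carries two standbys, so the MDS_INSUFFICIENT_STANDBY health warning stays clear. A small wanted-versus-available check from `ceph fs dump --format json`; the JSON field names follow recent Ceph releases and should be treated as an assumption:

    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        check=True, capture_output=True).stdout)

    available = len(dump.get("standbys", []))
    for fs in dump["filesystems"]:
        wanted = fs["mdsmap"]["standby_count_wanted"]
        print(fs["mdsmap"]["fs_name"], "wants", wanted,
              "standby(s), cluster has", available)
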
Nov 25 09:34:05 compute-1 ceph-mon[79643]: pgmap v13: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 14 op/s
Nov 25 09:34:05 compute-1 ceph-mon[79643]: from='client.14613 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:05 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-1 ceph-mon[79643]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 25 09:34:05 compute-1 ceph-mon[79643]: mds.? [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] up:boot
Nov 25 09:34:05 compute-1 ceph-mon[79643]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:05 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.knpqas"}]: dispatch
Nov 25 09:34:06 compute-1 ceph-mon[79643]: from='client.14619 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e7 new map
Nov 25 09:34:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           btime 2025-11-25T09:34:07.562567+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:06.658104+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 09:34:07 compute-1 ceph-mon[79643]: pgmap v14: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 12 op/s
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 25 09:34:07 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:08 compute-1 ceph-mon[79643]: from='client.14625 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:08 compute-1 ceph-mon[79643]: Regenerating cephadm self-signed grafana TLS certificates
Nov 25 09:34:08 compute-1 ceph-mon[79643]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 25 09:34:08 compute-1 ceph-mon[79643]: Deploying daemon grafana.compute-0 on compute-0
Nov 25 09:34:08 compute-1 ceph-mon[79643]: mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:active
Nov 25 09:34:08 compute-1 ceph-mon[79643]: mds.? [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] up:standby
Nov 25 09:34:08 compute-1 ceph-mon[79643]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:08 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3129853501' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:34:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e8 new map
Nov 25 09:34:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           btime 2025-11-25T09:34:08.572718+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:06.658104+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
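
From e7 onward both the active and the standby entries carry join_fscid=1: cephadm pins each MDS of the service to its filesystem via the mds_join_fs option, and the monitors record that affinity in the map. Setting the same pin by hand would look like the sketch below; the daemon name is taken from the log, and `mds_join_fs` is a standard option:

    import subprocess

    # Pin one MDS daemon to the 'cephfs' filesystem; later fsmap epochs
    # will show the affinity as join_fscid on that daemon's entry.
    subprocess.run(
        ["ceph", "config", "set",
         "mds.cephfs.compute-1.knpqas", "mds_join_fs", "cephfs"],
        check=True)
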
Nov 25 09:34:08 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Updating MDS map to version 8 from mon.2
Nov 25 09:34:09 compute-1 ceph-mon[79643]: pgmap v15: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 11 op/s
Nov 25 09:34:09 compute-1 ceph-mon[79643]: mds.? [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] up:standby
Nov 25 09:34:09 compute-1 ceph-mon[79643]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1994516596' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:34:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Nov 25 09:34:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1803730132' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 09:34:11 compute-1 ceph-mon[79643]: pgmap v16: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 9 op/s
Nov 25 09:34:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1803730132' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 09:34:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:12 compute-1 ceph-mon[79643]: pgmap v17: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 09:34:12 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:14 compute-1 ceph-mon[79643]: pgmap v18: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 09:34:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/580932889' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 09:34:15 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-1 ceph-mon[79643]: Deploying daemon haproxy.rgw.default.compute-0.jgcdmc on compute-0
Nov 25 09:34:17 compute-1 ceph-mon[79643]: pgmap v19: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 09:34:17 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:19 compute-1 ceph-mon[79643]: pgmap v20: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 09:34:21 compute-1 ceph-mon[79643]: pgmap v21: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 09:34:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.002000018s ======
Nov 25 09:34:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:21.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000018s
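
The beast lines from radosgw are fixed-order access-log records: client IP, user (anonymous for these unauthenticated probes), timestamp, request line, HTTP status, byte count, and the request latency. The recurring HEAD / requests from 192.168.122.100 and 192.168.122.102 are consistent with health checks from the haproxy ingress daemons deployed above. A parsing sketch with a regex fitted to these lines; the format is taken from this log, not from a documented stable contract:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous '
            '[25/Nov/2025:09:34:21.713 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.002000018s')
    m = BEAST_RE.search(line)
    print(m["ip"], m["status"], float(m["latency"]))  # 192.168.122.100 200 0.002000018
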
Nov 25 09:34:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:22 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:22 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:22 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:22 compute-1 ceph-mon[79643]: Deploying daemon haproxy.rgw.default.compute-2.jrahab on compute-2
Nov 25 09:34:23 compute-1 ceph-mon[79643]: pgmap v22: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:34:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:23.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:34:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:25.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:25 compute-1 ceph-mon[79643]: pgmap v23: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:25 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:25 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:25 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:25 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:25 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:34:25 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:34:25 compute-1 ceph-mon[79643]: Deploying daemon keepalived.rgw.default.compute-2.aswfow on compute-2
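
The two "192.168.122.2 is in 192.168.122.0/24" lines are cephadm's placement check for the ingress virtual IP: keepalived only goes to hosts with an interface whose subnet covers the VIP (br-ex here). The test itself is plain subnet membership:

    import ipaddress

    vip = ipaddress.ip_address("192.168.122.2")
    subnet = ipaddress.ip_network("192.168.122.0/24")  # br-ex subnet per the log
    for host in ("compute-2", "compute-0"):
        if vip in subnet:
            print(f"{vip} is in {subnet} on {host} interface br-ex")
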
Nov 25 09:34:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:25.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:27.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:27 compute-1 ceph-mon[79643]: pgmap v24: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:27.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:29.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:29 compute-1 ceph-mon[79643]: pgmap v25: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:29.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:30 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:30 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:30 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:31.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:31 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:34:31 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:34:31 compute-1 ceph-mon[79643]: Deploying daemon keepalived.rgw.default.compute-0.ulmpfs on compute-0
Nov 25 09:34:31 compute-1 ceph-mon[79643]: pgmap v26: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:32 compute-1 ceph-mon[79643]: pgmap v27: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:33.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:34:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:33.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:34:34 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:34 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:34 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:34 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:35 compute-1 ceph-mon[79643]: pgmap v28: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:35 compute-1 ceph-mon[79643]: Deploying daemon prometheus.compute-0 on compute-0
Nov 25 09:34:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.649878) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277649987, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 5430, "num_deletes": 256, "total_data_size": 15946287, "memory_usage": 16834800, "flush_reason": "Manual Compaction"}
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277670822, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 10053635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 5435, "table_properties": {"data_size": 10033610, "index_size": 12487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6469, "raw_key_size": 62140, "raw_average_key_size": 24, "raw_value_size": 9983568, "raw_average_value_size": 3883, "num_data_blocks": 554, "num_entries": 2571, "num_filter_entries": 2571, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 1764063177, "file_creation_time": 1764063277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 20974 microseconds, and 12795 cpu microseconds.
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.670869) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 10053635 bytes OK
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.670890) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.671279) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.671292) EVENT_LOG_v1 {"time_micros": 1764063277671288, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.671307) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 15917129, prev total WAL file size 15917129, number of live WAL files 2.
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.673159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(9818KB) 8(1773B)]
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277673245, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 10055408, "oldest_snapshot_seqno": -1}
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2319 keys, 10050360 bytes, temperature: kUnknown
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277692881, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 10050360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10030916, "index_size": 12551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 5829, "raw_key_size": 58649, "raw_average_key_size": 25, "raw_value_size": 9983973, "raw_average_value_size": 4305, "num_data_blocks": 555, "num_entries": 2319, "num_filter_entries": 2319, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764063277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.693165) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 10050360 bytes
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.693919) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 507.8 rd, 507.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(9.6, 0.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2576, records dropped: 257 output_compression: NoCompression
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.693939) EVENT_LOG_v1 {"time_micros": 1764063277693930, "job": 4, "event": "compaction_finished", "compaction_time_micros": 19802, "compaction_time_cpu_micros": 13743, "output_level": 6, "num_output_files": 1, "total_output_size": 10050360, "num_input_records": 2576, "num_output_records": 2319, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
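
The compaction summary's throughput figures can be cross-checked from the event payloads: job 4 read input_data_size 10055408 bytes in compaction_time_micros 19802, and bytes per microsecond equals megabytes per second in decimal units:

    input_bytes = 10_055_408    # "compaction_started" payload above
    output_bytes = 10_050_360   # "compaction_finished" total_output_size
    elapsed_us = 19_802         # compaction_time_micros
    print(round(input_bytes / elapsed_us, 1))   # 507.8 -> "507.8 rd" MB/sec
    print(round(output_bytes / elapsed_us, 1))  # 507.5 -> "507.5 wr" MB/sec
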
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277695542, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277695736, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 25 09:34:37 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:34:37.673072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
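
Every EVENT_LOG_v1 line embeds a JSON document after the marker, so flush and compaction statistics like the ones above can be recovered mechanically from the journal. A minimal extraction sketch over any iterable of log lines:

    import json

    MARKER = "EVENT_LOG_v1 "

    def rocksdb_events(lines):
        """Yield parsed event dicts from rocksdb EVENT_LOG_v1 lines."""
        for line in lines:
            _, sep, payload = line.partition(MARKER)
            if sep:
                yield json.loads(payload)

    # Over the block above, [e["event"] for e in rocksdb_events(...)] yields
    # flush_started, table_file_creation, flush_finished, compaction_started,
    # table_file_creation, compaction_finished, table_file_deletion, ...
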
Nov 25 09:34:37 compute-1 ceph-mon[79643]: pgmap v29: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:37 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:37.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:39 compute-1 ceph-mon[79643]: pgmap v30: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:39 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:39 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:39 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:39 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  1: '-n'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  2: 'mgr.compute-1.plffrn'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  3: '-f'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  4: '--setuser'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  5: 'ceph'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  6: '--setgroup'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  7: 'ceph'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  8: '--default-log-to-file=false'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  9: '--default-log-to-journald=true'
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr respawn  10: '--default-log-to-stderr=false'
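
Editor's note: enabling the prometheus module (dispatched above) changes the set of enabled modules carried in the mgrmap, and every ceph-mgr, active or standby, reacts by respawning: it logs its executable and each argv element (the e: and 0:..10: lines above), then replaces its own process image so the new module set loads into a fresh embedded Python runtime. A minimal sketch of that execv-style self-replacement pattern, not ceph's actual code:

    import os
    import sys

    def respawn():
        """Re-exec the current process in place, preserving argv.

        Mirrors the 'mgr respawn' lines above: the process image is
        replaced, the PID is kept, and everything is reloaded on startup.
        """
        exe = sys.executable           # ceph-mgr logs '/usr/bin/ceph-mgr' here
        argv = [exe] + sys.argv[1:]    # items 1..10 in the log ('-n', name, '-f', ...)
        os.execv(exe, argv)            # never returns on success
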
Nov 25 09:34:39 compute-1 sshd-session[83696]: Connection closed by 192.168.122.100 port 50276
Nov 25 09:34:39 compute-1 sshd-session[83693]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:34:39 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Nov 25 09:34:39 compute-1 systemd[1]: session-33.scope: Consumed 3.954s CPU time.
Nov 25 09:34:39 compute-1 systemd-logind[746]: Session 33 logged out. Waiting for processes to exit.
Nov 25 09:34:39 compute-1 systemd-logind[746]: Removed session 33.
Nov 25 09:34:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: ignoring --setuser ceph since I am not root
Nov 25 09:34:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: ignoring --setgroup ceph since I am not root
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: pidfile_write: ignore empty --pid-file
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'alerts'
Nov 25 09:34:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:39.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:39.793+0000 7fe543a4c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr[py] Module alerts has missing NOTIFY_TYPES member
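
Editor's note: the "-1 mgr[py] Module ... has missing NOTIFY_TYPES member" warnings repeated for nearly every module below are cosmetic. Recent ceph-mgr releases expect each Python module to declare a NOTIFY_TYPES list naming the notification kinds its notify() hook consumes, and log this when the attribute is absent. A hedged sketch of the declaration the loader checks for (mgr_module is only importable inside ceph-mgr's embedded interpreter; the NotifyType members shown are illustrative):

    from mgr_module import MgrModule, NotifyType  # available inside ceph-mgr only

    class Example(MgrModule):
        # Declaring which notifications notify() consumes silences the
        # "has missing NOTIFY_TYPES member" warning seen in this log.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed")
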
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'balancer'
Nov 25 09:34:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:39.864+0000 7fe543a4c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:34:39 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'cephadm'
Nov 25 09:34:40 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'crash'
Nov 25 09:34:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:40.526+0000 7fe543a4c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:34:40 compute-1 ceph-mgr[79928]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:34:40 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'dashboard'
Nov 25 09:34:40 compute-1 ceph-mon[79643]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 25 09:34:40 compute-1 ceph-mon[79643]: mgrmap e25: compute-0.zcfgby(active, since 47s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:34:40 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:34:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:41.060+0000 7fe543a4c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:34:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:34:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:41.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:34:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:34:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:34:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]:   from numpy import show_config as show_numpy_config
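
Editor's note: the three lines above are one Python UserWarning split across journal entries. ceph-mgr runs each module in its own sub-interpreter, and importing scipy (a dependency of diskprediction_local) pulls in NumPy, which does not officially support sub-interpreters, hence the warning. It goes through the standard warnings machinery, so a module that finds it noisy could suppress just this message before the import; a sketch (assumes scipy is installed, as it is in the ceph container image):

    import warnings

    # Suppress only this specific UserWarning before scipy drags numpy in.
    warnings.filterwarnings(
        "ignore",
        message="NumPy was imported from a Python sub-interpreter",
        category=UserWarning,
    )
    import scipy  # noqa: E402  (import deliberately placed after the filter)
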
Nov 25 09:34:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:41.201+0000 7fe543a4c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'influx'
Nov 25 09:34:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:41.263+0000 7fe543a4c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'insights'
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'iostat'
Nov 25 09:34:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:41.381+0000 7fe543a4c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'localpool'
Nov 25 09:34:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:41.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:34:41 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'mirroring'
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'nfs'
Nov 25 09:34:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
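
Editor's note: _set_new_cache_sizes is the monitor's periodic cache tuner redistributing its ~0.95 GiB cache_size target, here roughly 332 MiB each for the incremental and full osdmap caches and 308 MiB for the rocksdb KV cache. Checking those ratios against the logged values:

    cache_size = 1020054731            # values from the log line above
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 322961408

    for name, v in [("inc", inc_alloc), ("full", full_alloc), ("kv", kv_alloc)]:
        print(f"{name}: {v / 2**20:7.1f} MiB  ({v / cache_size:.0%} of target)")
    # -> inc/full: 332.0 MiB (34%) each, kv: 308.0 MiB (32%); the three
    #    allocations together account for essentially the whole target.
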
Nov 25 09:34:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:42.223+0000 7fe543a4c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:34:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:42.411+0000 7fe543a4c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:34:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:42.476+0000 7fe543a4c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'osd_support'
Nov 25 09:34:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:42.535+0000 7fe543a4c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:34:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:42.602+0000 7fe543a4c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'progress'
Nov 25 09:34:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:42.664+0000 7fe543a4c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'prometheus'
Nov 25 09:34:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:42.957+0000 7fe543a4c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:34:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:43.041+0000 7fe543a4c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'restful'
Nov 25 09:34:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:43.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rgw'
Nov 25 09:34:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:43.413+0000 7fe543a4c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'rook'
Nov 25 09:34:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:43.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:43.892+0000 7fe543a4c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'selftest'
Nov 25 09:34:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:43.954+0000 7fe543a4c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:44.025+0000 7fe543a4c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'stats'
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'status'
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:44.152+0000 7fe543a4c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telegraf'
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:44.213+0000 7fe543a4c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'telemetry'
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:44.346+0000 7fe543a4c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:44.535+0000 7fe543a4c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'volumes'
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:44.762+0000 7fe543a4c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Loading python module 'zabbix'
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 2025-11-25T09:34:44.822+0000 7fe543a4c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: ms_deliver_dispatch: unhandled message 0x56084901d860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr load Constructed class from module: dashboard
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: mgr load Constructed class from module: prometheus
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [dashboard INFO root] Starting engine...
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [prometheus INFO root] Starting engine...
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: [25/Nov/2025:09:34:44] ENGINE Bus STARTING
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:44] ENGINE Bus STARTING
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: CherryPy Checker:
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: The Application mounted at '' has an empty config.
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: 
Nov 25 09:34:44 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Nov 25 09:34:44 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn restarted
Nov 25 09:34:44 compute-1 ceph-mon[79643]: Standby manager daemon compute-1.plffrn started
Nov 25 09:34:44 compute-1 ceph-mon[79643]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:34:44 compute-1 ceph-mon[79643]: Activating manager daemon compute-0.zcfgby
Nov 25 09:34:44 compute-1 ceph-mon[79643]: osdmap e42: 3 total, 3 up, 3 in
Nov 25 09:34:44 compute-1 ceph-mon[79643]: mgrmap e26: compute-0.zcfgby(active, starting, since 0.0237431s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:34:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:34:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:34:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.pwazzx"}]: dispatch
Nov 25 09:34:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.knpqas"}]: dispatch
Nov 25 09:34:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjveyw"}]: dispatch
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [dashboard INFO root] Engine started...
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: [25/Nov/2025:09:34:44] ENGINE Serving on http://:::9283
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:44] ENGINE Serving on http://:::9283
Nov 25 09:34:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-1-plffrn[79924]: [25/Nov/2025:09:34:44] ENGINE Bus STARTED
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:44] ENGINE Bus STARTED
Nov 25 09:34:44 compute-1 ceph-mgr[79928]: [prometheus INFO root] Engine started.
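
Editor's note: both engines are now up on this standby mgr: the dashboard on 8443 and the prometheus exporter on 9283 ("Serving on http://:::9283" is an all-interfaces IPv6 bind). Only the active mgr (compute-0 at this point in the mgrmap) serves real metrics; standbys typically return an empty or redirecting response. A quick scrape sketch against the active mgr (host assumed from the mgrmap above):

    import urllib.request

    # Scrape the ceph-mgr prometheus exporter started above (port 9283).
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("ceph_health_status"):
                print(line)  # e.g. 'ceph_health_status 0.0' when HEALTH_OK
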
Nov 25 09:34:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:45.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:45 compute-1 sshd-session[85294]: Accepted publickey for ceph-admin from 192.168.122.100 port 42516 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:34:45 compute-1 systemd-logind[746]: New session 34 of user ceph-admin.
Nov 25 09:34:45 compute-1 systemd[1]: Started Session 34 of User ceph-admin.
Nov 25 09:34:45 compute-1 sshd-session[85294]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:34:45 compute-1 sudo[85298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:45 compute-1 sudo[85298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:45 compute-1 sudo[85298]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:45 compute-1 sudo[85323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:34:45 compute-1 sudo[85323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:45 compute-1 podman[85404]: 2025-11-25 09:34:45.830170484 +0000 UTC m=+0.038044900 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: Manager daemon compute-0.zcfgby is now available
Nov 25 09:34:45 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft restarted
Nov 25 09:34:45 compute-1 ceph-mon[79643]: Standby manager daemon compute-2.flybft started
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:34:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:34:45 compute-1 podman[85404]: 2025-11-25 09:34:45.916638183 +0000 UTC m=+0.124512580 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:34:46 compute-1 podman[85514]: 2025-11-25 09:34:46.264543204 +0000 UTC m=+0.033994957 container exec 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:46 compute-1 podman[85514]: 2025-11-25 09:34:46.268133621 +0000 UTC m=+0.037585364 container exec_died 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:46 compute-1 sudo[85323]: pam_unix(sudo:session): session closed for user root
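
Editor's note: the ssh session opened above is the orchestrator's periodic host refresh. It locates python3, then runs the fsid-pinned copy of cephadm (note the content-addressed filename cephadm.<sha256>) with 'ls' to inventory deployed daemons, followed below by 'gather-facts' and 'list-networks'; all three subcommands print JSON on stdout. A sketch of consuming 'cephadm ls' the same way (assumes passwordless sudo and a cephadm binary on PATH rather than the pinned copy):

    import json
    import subprocess

    # 'cephadm ls' prints a JSON array describing the daemons deployed on
    # this host, which is what the mgr's ssh session above is collecting.
    out = subprocess.run(
        ["sudo", "cephadm", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout
    for daemon in json.loads(out):
        print(daemon["name"], daemon.get("state"))
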
Nov 25 09:34:46 compute-1 sudo[85562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:46 compute-1 sudo[85562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:46 compute-1 sudo[85562]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:46 compute-1 sudo[85587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:34:46 compute-1 sudo[85587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:46 compute-1 sudo[85587]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:46 compute-1 ceph-mon[79643]: mgrmap e27: compute-0.zcfgby(active, since 1.06329s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:46 compute-1 ceph-mon[79643]: [25/Nov/2025:09:34:46] ENGINE Bus STARTING
Nov 25 09:34:46 compute-1 ceph-mon[79643]: [25/Nov/2025:09:34:46] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:34:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-1 ceph-mon[79643]: [25/Nov/2025:09:34:46] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:34:46 compute-1 ceph-mon[79643]: [25/Nov/2025:09:34:46] ENGINE Bus STARTED
Nov 25 09:34:46 compute-1 ceph-mon[79643]: [25/Nov/2025:09:34:46] ENGINE Client ('192.168.122.100', 39184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
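
Editor's note: these ENGINE lines come from the active mgr's cephadm module, relayed into the cluster log: 8765 is its plain-HTTP service-discovery endpoint (used for prometheus scrape configs) and 7150 the HTTPS cephadm agent endpoint. The "Client ... lost" message is benign: something connected to the TLS port and hung up before completing the handshake, which is exactly what a TCP-level reachability probe looks like. Reproducing it is trivial:

    import socket

    # Opening a TCP connection to the TLS listener and closing it without
    # speaking TLS makes CherryPy log the handshake-EOF message seen above.
    with socket.create_connection(("192.168.122.100", 7150), timeout=2):
        pass  # connect, then drop the connection mid-handshake
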
Nov 25 09:34:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-1 sudo[85641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:46 compute-1 sudo[85641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:46 compute-1 sudo[85641]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:46 compute-1 sudo[85666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 09:34:46 compute-1 sudo[85666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:47.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:47 compute-1 sudo[85666]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:47.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:48 compute-1 sudo[85707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:34:48 compute-1 sudo[85707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85707]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[85732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:34:48 compute-1 sudo[85732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85732]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[85757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:34:48 compute-1 sudo[85757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85757]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 ceph-mon[79643]: pgmap v4: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:48 compute-1 ceph-mon[79643]: mgrmap e28: compute-0.zcfgby(active, since 2s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
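
Editor's note: 'config generate-minimal-conf' asks the mons to render the smallest ceph.conf a client needs (essentially the fsid plus mon_host), and 'auth get client.admin' fetches the admin keyring; both are pushed to every host in the steps that follow. For reference, the generated file looks roughly like this (addresses inferred from elsewhere in this log; the real output may differ):

    [global]
        fsid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
        mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
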
Nov 25 09:34:48 compute-1 sudo[85782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:48 compute-1 sudo[85782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85782]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[85807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:34:48 compute-1 sudo[85807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85807]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[85855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:34:48 compute-1 sudo[85855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85855]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[85880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:34:48 compute-1 sudo[85880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85880]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[85905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:34:48 compute-1 sudo[85905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85905]: pam_unix(sudo:session): session closed for user root
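
Editor's note: the mkdir/touch/chown/chmod/mv run above (repeated below for the admin keyring, with mode 600 instead of 644) is cephadm's standard file-distribution dance: stage under /tmp/cephadm-<fsid>, fix mode and ownership while staged, then mv onto the final path so readers never see a half-written file. A condensed sketch of the same pattern; staging in the destination directory makes the final rename atomic, which /bin/mv from /tmp only is when both paths share a filesystem:

    import os
    import tempfile

    def install_file(data: bytes, dest: str, mode: int,
                     uid: int = 0, gid: int = 0) -> None:
        """Stage, set mode/ownership, then atomically rename into place."""
        # Stage next to the destination so os.replace() is a true rename.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".", suffix=".new")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                os.fchmod(f.fileno(), mode)       # the chmod 644 / 600 steps above
                os.fchown(f.fileno(), uid, gid)   # the chown -R 0:0 step (needs root)
            os.replace(tmp, dest)                 # the final mv: old or new, never half
        except BaseException:
            try:
                os.unlink(tmp)
            except FileNotFoundError:
                pass
            raise
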
Nov 25 09:34:48 compute-1 sudo[85930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:34:48 compute-1 sudo[85930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85930]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[85955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:34:48 compute-1 sudo[85955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85955]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[85980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:34:48 compute-1 sudo[85980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[85980]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:48 compute-1 sudo[86005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86005]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:34:48 compute-1 sudo[86030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86030]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:34:48 compute-1 sudo[86078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86078]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:34:48 compute-1 sudo[86103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86103]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:48 compute-1 sudo[86128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86128]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:34:48 compute-1 sudo[86153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86153]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:34:48 compute-1 sudo[86178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86178]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:34:48 compute-1 sudo[86203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86203]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-1 sudo[86228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:48 compute-1 sudo[86228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-1 sudo[86228]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-1 sudo[86253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86253]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-1 sudo[86301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:49 compute-1 sudo[86301]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-1 sudo[86326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86326]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:49 compute-1 sudo[86351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86351]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 ceph-mon[79643]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:34:49 compute-1 ceph-mon[79643]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:34:49 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:34:49 compute-1 ceph-mon[79643]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:49 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:49 compute-1 ceph-mon[79643]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:49 compute-1 sudo[86376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:34:49 compute-1 sudo[86376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86376]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:34:49 compute-1 sudo[86401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86401]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-1 sudo[86426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86426]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:49 compute-1 sudo[86451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86451]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-1 sudo[86476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86476]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-1 sudo[86524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86524]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-1 sudo[86549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86549]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:49 compute-1 sudo[86574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86574]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:49 compute-1 sudo[86599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:49 compute-1 sudo[86599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-1 sudo[86599]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-1 sudo[86624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:49 compute-1 sudo[86624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:50 compute-1 podman[86683]: 2025-11-25 09:34:50.210029703 +0000 UTC m=+0.025486029 container create 67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:50 compute-1 ceph-mon[79643]: pgmap v5: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:50 compute-1 ceph-mon[79643]: mgrmap e29: compute-0.zcfgby(active, since 4s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
                                           service_id: nfs.cephfs
                                           service_name: ingress.nfs.cephfs
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           spec:
                                             backend_service: nfs.cephfs
                                             enable_haproxy_protocol: true
                                             first_virtual_router_id: 50
                                             frontend_port: 2049
                                             monitor_port: 9049
                                             virtual_ip: 192.168.122.2/24
                                           ''')): max() arg is an empty sequence
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
                                               if self._apply_service(spec):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
                                               daemon_spec = svc.prepare_create(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
                                               return self.haproxy_prepare_create(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
                                               daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
                                               num_ranks = 1 + max(by_rank.keys())
                                           ValueError: max() arg is an empty sequence
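
Editor's note: this traceback is the one real failure in this stretch. Applying ingress.nfs.cephfs runs haproxy_generate_config, which derives the backend server count as num_ranks = 1 + max(by_rank.keys()) over the rank-to-daemon map of the backing nfs.cephfs service; at this instant no ranked nfs daemons exist yet (their keys are only being created in the lines that follow), the map is empty, and max() raises. The serve loop retries on each pass, so the spec normally applies once the nfs daemons are up. The idiomatic guard is max()'s default argument; a sketch of the failing and guarded forms, not the upstream patch:

    by_rank: dict[int, str] = {}   # rank -> daemon name; empty before any nfs deploys

    # Fails exactly as in the traceback above when the map is empty:
    #     num_ranks = 1 + max(by_rank.keys())   # ValueError: max() arg is an empty sequence

    # Guarded form: no ranks yet means zero ranks; a later serve pass fills them in.
    num_ranks = 1 + max(by_rank.keys(), default=-1)
    assert num_ranks == 0
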
Nov 25 09:34:50 compute-1 ceph-mon[79643]: pgmap v6: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:50 compute-1 ceph-mon[79643]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
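Each "Creating key for ..." line pairs with a dispatch/finished couple like the ones above; auth get-or-create is idempotent, returning the stored key when the entity already exists with matching caps. The same call can be re-issued by hand with the exact entity and caps recorded in the audit line; a sketch assuming an admin keyring on the host and that the CLI's JSON output is a one-element list:

    import json
    import subprocess

    # Re-issue the auth call the mgr dispatched above. get-or-create is
    # idempotent: if the entity exists with matching caps, the stored key
    # comes back unchanged.
    out = subprocess.run(
        ["ceph", "auth", "get-or-create",
         "client.nfs.cephfs.0.0.compute-1.yfzsxe",
         "mon", "allow r",
         "osd", "allow rw pool=.nfs namespace=cephfs",
         "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    print(json.loads(out.stdout)[0]["key"])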
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-1 systemd[1]: Started libpod-conmon-67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657.scope.
Nov 25 09:34:50 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:34:50 compute-1 podman[86683]: 2025-11-25 09:34:50.248817562 +0000 UTC m=+0.064273908 container init 67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:34:50 compute-1 podman[86683]: 2025-11-25 09:34:50.254065223 +0000 UTC m=+0.069521539 container start 67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poincare, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:50 compute-1 podman[86683]: 2025-11-25 09:34:50.255253984 +0000 UTC m=+0.070710331 container attach 67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:50 compute-1 hopeful_poincare[86696]: 167 167
Nov 25 09:34:50 compute-1 systemd[1]: libpod-67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657.scope: Deactivated successfully.
Nov 25 09:34:50 compute-1 podman[86683]: 2025-11-25 09:34:50.258359337 +0000 UTC m=+0.073815663 container died 67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-6f3ec50330e653d8e40f877ba1c755607b72f319fa0a44e828b66e2a88664597-merged.mount: Deactivated successfully.
Nov 25 09:34:50 compute-1 podman[86683]: 2025-11-25 09:34:50.275804259 +0000 UTC m=+0.091260585 container remove 67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:34:50 compute-1 podman[86683]: 2025-11-25 09:34:50.199179602 +0000 UTC m=+0.014635948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:50 compute-1 systemd[1]: libpod-conmon-67911691f963199fe39d5959365553fada9786ef0c77f1d11061f09b0964d657.scope: Deactivated successfully.
Nov 25 09:34:50 compute-1 systemd[1]: Reloading.
Nov 25 09:34:50 compute-1 systemd-rc-local-generator[86735]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:50 compute-1 systemd-sysv-generator[86738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:50 compute-1 systemd[1]: Reloading.
Nov 25 09:34:50 compute-1 systemd-rc-local-generator[86771]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:50 compute-1 systemd-sysv-generator[86776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:50 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:50 compute-1 podman[86827]: 2025-11-25 09:34:50.852489587 +0000 UTC m=+0.028641346 container create 1040e2179a4cb5b367260603abada9a3d1cccac114aa8c72f1d5923452218622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaea8e95a4c66288ac689f8807cd78d131ce37c0064a04a0cfdabd95ed1a91c7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaea8e95a4c66288ac689f8807cd78d131ce37c0064a04a0cfdabd95ed1a91c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaea8e95a4c66288ac689f8807cd78d131ce37c0064a04a0cfdabd95ed1a91c7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaea8e95a4c66288ac689f8807cd78d131ce37c0064a04a0cfdabd95ed1a91c7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:50 compute-1 podman[86827]: 2025-11-25 09:34:50.888371569 +0000 UTC m=+0.064523358 container init 1040e2179a4cb5b367260603abada9a3d1cccac114aa8c72f1d5923452218622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:50 compute-1 podman[86827]: 2025-11-25 09:34:50.893306431 +0000 UTC m=+0.069458190 container start 1040e2179a4cb5b367260603abada9a3d1cccac114aa8c72f1d5923452218622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:50 compute-1 bash[86827]: 1040e2179a4cb5b367260603abada9a3d1cccac114aa8c72f1d5923452218622
Nov 25 09:34:50 compute-1 podman[86827]: 2025-11-25 09:34:50.841817241 +0000 UTC m=+0.017969010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:50 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:50 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:34:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:50 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:34:50 compute-1 sudo[86624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:50 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:34:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:50 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:34:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:50 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:34:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:50 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:34:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:51 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:34:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:51 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:34:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:51 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 25 09:34:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:51 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 25 09:34:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:51 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:34:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:51 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
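The IN GRACE / NOT IN GRACE transitions above, and the mon's "Ensuring nfs.cephfs.0 is in the ganesha grace table" line, all revolve around the shared recovery database that ganesha's rados_cluster backend keeps in the .nfs pool under the cephfs namespace (matching the caps granted to the client.mgr.nfs.grace key earlier); the "Failed to lst kv ret=-2" events a moment before are that same backend finding no recovery object on first start (-2 is ENOENT), which is benign here. A read-only peek at the table with the python-rados bindings; the object name "grace" and the per-nodeid omap layout are assumptions about the rados_cluster backend's conventions, and the conffile path is illustrative:

    import rados

    # Dump the ganesha grace object's omap: one key per ganesha nodeid,
    # value = that node's grace/enforcing flags. Object name "grace" is an
    # assumption; pool and namespace match the caps in the audit log above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(".nfs")
        ioctx.set_namespace("cephfs")
        with rados.ReadOpCtx() as op:
            it, _ = ioctx.get_omap_vals(op, "", "", 64)
            ioctx.operate_read_op(op, "grace")
            for nodeid, flags in it:
                print(nodeid, flags)
        ioctx.close()
    finally:
        cluster.shutdown()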
Nov 25 09:34:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
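The anonymous "HEAD / HTTP/1.0" request pairs from 192.168.122.100 and 192.168.122.102, repeating on a roughly 2 s cadence through the rest of this log, are load-balancer health probes against this radosgw rather than client I/O: op status=0, http_status=200, empty body. The probe is trivially reproduced over a raw socket; target address and port below are placeholders, since the beast frontend's bind address is not shown in this excerpt:

    import socket

    # Send the same probe the balancer does: HEAD / over HTTP/1.0, anonymous,
    # then read just the status line. RGW answers 200 with an empty body.
    PROBE = b"HEAD / HTTP/1.0\r\n\r\n"

    with socket.create_connection(("192.168.122.102", 8080), timeout=2) as s:
        s.sendall(PROBE)
        status = s.makefile("rb").readline().decode("ascii", "replace")

    print(status.strip())         # expect something like: HTTP/1.1 200 OK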
Nov 25 09:34:51 compute-1 ceph-mon[79643]: Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:51 compute-1 ceph-mon[79643]: Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw
Nov 25 09:34:51 compute-1 ceph-mon[79643]: Bind address in nfs.cephfs.0.0.compute-1.yfzsxe's ganesha conf is defaulting to empty
Nov 25 09:34:51 compute-1 ceph-mon[79643]: Deploying daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1
Nov 25 09:34:51 compute-1 ceph-mon[79643]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:34:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:51.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:52 compute-1 ceph-mon[79643]: Creating key for client.nfs.cephfs.1.0.compute-2.jouchy
Nov 25 09:34:52 compute-1 ceph-mon[79643]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 25 09:34:52 compute-1 ceph-mon[79643]: Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:52 compute-1 ceph-mon[79643]: Creating key for client.nfs.cephfs.1.0.compute-2.jouchy-rgw
Nov 25 09:34:52 compute-1 ceph-mon[79643]: Bind address in nfs.cephfs.1.0.compute-2.jouchy's ganesha conf is defaulting to empty
Nov 25 09:34:52 compute-1 ceph-mon[79643]: Deploying daemon nfs.cephfs.1.0.compute-2.jouchy on compute-2
Nov 25 09:34:52 compute-1 ceph-mon[79643]: pgmap v7: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 355 B/s wr, 13 op/s
Nov 25 09:34:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:53 compute-1 ceph-mon[79643]: Creating key for client.nfs.cephfs.2.0.compute-0.rychik
Nov 25 09:34:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:53 compute-1 ceph-mon[79643]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 25 09:34:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:34:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:54 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:34:54 compute-1 ceph-mon[79643]: pgmap v8: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 264 B/s wr, 9 op/s
Nov 25 09:34:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 09:34:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:34:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:55.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:55 compute-1 ceph-mon[79643]: Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:55 compute-1 ceph-mon[79643]: Creating key for client.nfs.cephfs.2.0.compute-0.rychik-rgw
Nov 25 09:34:55 compute-1 ceph-mon[79643]: Bind address in nfs.cephfs.2.0.compute-0.rychik's ganesha conf is defaulting to empty
Nov 25 09:34:55 compute-1 ceph-mon[79643]: Deploying daemon nfs.cephfs.2.0.compute-0.rychik on compute-0
Nov 25 09:34:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:55 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:34:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:55 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:34:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:55 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:34:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:55 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:34:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:34:55 : epoch 6925783a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:34:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:34:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:34:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:56 compute-1 ceph-mon[79643]: pgmap v9: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 209 B/s wr, 7 op/s
Nov 25 09:34:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:57.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:57 compute-1 ceph-mon[79643]: pgmap v10: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 17 op/s
Nov 25 09:34:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:58 compute-1 sudo[86894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:34:58 compute-1 sudo[86894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-1 sudo[86894]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-1 sudo[86919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:34:58 compute-1 sudo[86919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-1 sudo[86919]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-1 sudo[86944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:58 compute-1 sudo[86944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-1 sudo[86944]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-1 sudo[86969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:34:58 compute-1 sudo[86969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:59 compute-1 podman[87051]: 2025-11-25 09:34:59.279949416 +0000 UTC m=+0.037696172 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:59 compute-1 podman[87051]: 2025-11-25 09:34:59.359583114 +0000 UTC m=+0.117329880 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:59 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-1 podman[87158]: 2025-11-25 09:34:59.707763272 +0000 UTC m=+0.035525542 container exec 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:59 compute-1 podman[87158]: 2025-11-25 09:34:59.714598446 +0000 UTC m=+0.042360695 container exec_died 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:34:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:59 compute-1 podman[87218]: 2025-11-25 09:34:59.86975846 +0000 UTC m=+0.034578085 container exec 1040e2179a4cb5b367260603abada9a3d1cccac114aa8c72f1d5923452218622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:59 compute-1 podman[87218]: 2025-11-25 09:34:59.879723593 +0000 UTC m=+0.044543198 container exec_died 1040e2179a4cb5b367260603abada9a3d1cccac114aa8c72f1d5923452218622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:34:59 compute-1 sudo[86969]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:00 compute-1 ceph-mon[79643]: pgmap v11: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 17 op/s
Nov 25 09:35:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:01 compute-1 sudo[87244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:01 compute-1 sudo[87244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:01 compute-1 sudo[87244]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:01 compute-1 sudo[87269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:35:01 compute-1 sudo[87269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:35:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:02 compute-1 ceph-mon[79643]: pgmap v12: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 KiB/s wr, 18 op/s
Nov 25 09:35:02 compute-1 ceph-mon[79643]: Deploying daemon haproxy.nfs.cephfs.compute-1.xlgqkq on compute-1
Nov 25 09:35:03 compute-1 ceph-mon[79643]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Nov 25 09:35:03 compute-1 ceph-mon[79643]: Cluster is now healthy
Nov 25 09:35:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:03.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:03 compute-1 podman[87328]: 2025-11-25 09:35:03.604938573 +0000 UTC m=+2.165044005 container create f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2 (image=quay.io/ceph/haproxy:2.3, name=happy_pascal)
Nov 25 09:35:03 compute-1 systemd[1]: Started libpod-conmon-f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2.scope.
Nov 25 09:35:03 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:35:03 compute-1 podman[87328]: 2025-11-25 09:35:03.594881406 +0000 UTC m=+2.154986849 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 09:35:03 compute-1 podman[87328]: 2025-11-25 09:35:03.665728773 +0000 UTC m=+2.225834205 container init f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2 (image=quay.io/ceph/haproxy:2.3, name=happy_pascal)
Nov 25 09:35:03 compute-1 podman[87328]: 2025-11-25 09:35:03.670501229 +0000 UTC m=+2.230606651 container start f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2 (image=quay.io/ceph/haproxy:2.3, name=happy_pascal)
Nov 25 09:35:03 compute-1 podman[87328]: 2025-11-25 09:35:03.671683688 +0000 UTC m=+2.231789131 container attach f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2 (image=quay.io/ceph/haproxy:2.3, name=happy_pascal)
Nov 25 09:35:03 compute-1 happy_pascal[87426]: 0 0
Nov 25 09:35:03 compute-1 systemd[1]: libpod-f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2.scope: Deactivated successfully.
Nov 25 09:35:03 compute-1 conmon[87426]: conmon f80035fe07e50e183817 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2.scope/container/memory.events
Nov 25 09:35:03 compute-1 podman[87328]: 2025-11-25 09:35:03.688008898 +0000 UTC m=+2.248114320 container died f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2 (image=quay.io/ceph/haproxy:2.3, name=happy_pascal)
Nov 25 09:35:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-da1349cafd9e287addf3d1e398d9980ba7486276c8c045ccf003ce70ed2f4b8c-merged.mount: Deactivated successfully.
Nov 25 09:35:03 compute-1 podman[87328]: 2025-11-25 09:35:03.708397989 +0000 UTC m=+2.268503411 container remove f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2 (image=quay.io/ceph/haproxy:2.3, name=happy_pascal)
Nov 25 09:35:03 compute-1 systemd[1]: libpod-conmon-f80035fe07e50e183817a3a15ccbc0534bc52c7354554c03aa812995a8cb0fe2.scope: Deactivated successfully.
Nov 25 09:35:03 compute-1 systemd[1]: Reloading.
Nov 25 09:35:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:03.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:03 compute-1 systemd-sysv-generator[87469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:35:03 compute-1 systemd-rc-local-generator[87466]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:35:03 compute-1 systemd[1]: Reloading.
Nov 25 09:35:04 compute-1 systemd-rc-local-generator[87506]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:35:04 compute-1 systemd-sysv-generator[87509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:35:04 compute-1 ceph-mon[79643]: pgmap v13: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:04 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.xlgqkq for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:04 compute-1 podman[87561]: 2025-11-25 09:35:04.331202375 +0000 UTC m=+0.026553672 container create 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:35:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d31834d2cb2c6dadf34409bd4de3b9fdd9dd7e6ffff2caf74282e2ec9c1b508/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:04 compute-1 podman[87561]: 2025-11-25 09:35:04.370603961 +0000 UTC m=+0.065955258 container init 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:35:04 compute-1 podman[87561]: 2025-11-25 09:35:04.375021307 +0000 UTC m=+0.070372604 container start 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:35:04 compute-1 bash[87561]: 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0
Nov 25 09:35:04 compute-1 podman[87561]: 2025-11-25 09:35:04.320242447 +0000 UTC m=+0.015593765 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 09:35:04 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.xlgqkq for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [NOTICE] 328/093504 (2) : New worker #1 (4) forked
Nov 25 09:35:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:35:04 : epoch 6925783a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea6c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:04 compute-1 sudo[87269]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:05.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:05 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-1 ceph-mon[79643]: Deploying daemon haproxy.nfs.cephfs.compute-0.lycwwd on compute-0
Nov 25 09:35:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[86839]: 25/11/2025 09:35:05 : epoch 6925783a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b7b7d16b30 fd 37 proxy ignored for local
Nov 25 09:35:05 compute-1 kernel: ganesha.nfsd[86888]: segfault at 50 ip 00007feb1a4a132e sp 00007feadeffc210 error 4 in libntirpc.so.5.8[7feb1a486000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 25 09:35:05 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:35:05 compute-1 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 25 09:35:05 compute-1 systemd[1]: Started Process Core Dump (PID 87586/UID 0).
Nov 25 09:35:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:35:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:05.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:35:06 compute-1 systemd-coredump[87587]: Process 86843 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 47:
                                                   #0  0x00007feb1a4a132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
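The segfault above killed ganesha.nfsd (PID 86843) inside libntirpc.so.5.8, and systemd-coredump captured the core on the host. A minimal sketch of how one might inspect it with coredumpctl (a standard systemd tool on RHEL 9; the debuginfo package names below are assumptions, and since the daemon ran inside a container the matching symbols may need to come from the container image rather than the host):

  # list captured dumps for the daemon, then show metadata for this PID
  coredumpctl list ganesha.nfsd
  coredumpctl info 86843
  # open the dump in gdb; install symbols first if available on the host
  # (package names assumed: nfs-ganesha, libntirpc)
  dnf debuginfo-install -y nfs-ganesha libntirpc
  coredumpctl debug 86843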
Nov 25 09:35:06 compute-1 systemd[1]: systemd-coredump@0-87586-0.service: Deactivated successfully.
Nov 25 09:35:06 compute-1 podman[87592]: 2025-11-25 09:35:06.547455811 +0000 UTC m=+0.017869312 container died 1040e2179a4cb5b367260603abada9a3d1cccac114aa8c72f1d5923452218622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-eaea8e95a4c66288ac689f8807cd78d131ce37c0064a04a0cfdabd95ed1a91c7-merged.mount: Deactivated successfully.
Nov 25 09:35:06 compute-1 podman[87592]: 2025-11-25 09:35:06.563969206 +0000 UTC m=+0.034382708 container remove 1040e2179a4cb5b367260603abada9a3d1cccac114aa8c72f1d5923452218622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:35:06 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:35:06 compute-1 ceph-mon[79643]: pgmap v14: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:06 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-1 ceph-mon[79643]: Deploying daemon haproxy.nfs.cephfs.compute-2.flyakz on compute-2
Nov 25 09:35:06 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
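Exit status 139 is 128+11, i.e. the SIGSEGV recorded above, so systemd marks the unit failed; the cephadm-generated units carry a restart policy, which is why a "Scheduled restart job" entry appears further down. A short sketch for confirming the failure and the unit's restart settings (unit name taken from the log; the properties queried are standard systemd service properties):

  systemctl status 'ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service'
  systemctl show -p Restart,RestartUSec,NRestarts \
      'ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service'
  journalctl -u 'ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service' -n 50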
Nov 25 09:35:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:07.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:07 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:07 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:07 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:07 compute-1 ceph-mon[79643]: Deploying daemon keepalived.nfs.cephfs.compute-0.kkgeot on compute-0
Nov 25 09:35:07 compute-1 ceph-mon[79643]: pgmap v15: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:07.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:08 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:08 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:08 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:08 compute-1 ceph-mon[79643]: Deploying daemon keepalived.nfs.cephfs.compute-2.opynes on compute-2
Nov 25 09:35:09 compute-1 sudo[87625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:09 compute-1 sudo[87625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:09 compute-1 sudo[87625]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:09.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:09 compute-1 sudo[87650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:35:09 compute-1 sudo[87650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:09.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:10 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:10 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:10 compute-1 ceph-mon[79643]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:10 compute-1 ceph-mon[79643]: Deploying daemon keepalived.nfs.cephfs.compute-1.adsqcr on compute-1
Nov 25 09:35:10 compute-1 ceph-mon[79643]: pgmap v16: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 178 B/s wr, 2 op/s
Nov 25 09:35:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:11.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:11.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:12 compute-1 ceph-mon[79643]: pgmap v17: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 178 B/s wr, 2 op/s
Nov 25 09:35:12 compute-1 podman[87709]: 2025-11-25 09:35:12.303771791 +0000 UTC m=+2.901597019 container create 3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2 (image=quay.io/ceph/keepalived:2.2.4, name=adoring_bhaskara, com.redhat.component=keepalived-container, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Nov 25 09:35:12 compute-1 podman[87709]: 2025-11-25 09:35:12.294307643 +0000 UTC m=+2.892132892 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 09:35:12 compute-1 systemd[1]: Started libpod-conmon-3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2.scope.
Nov 25 09:35:12 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:35:12 compute-1 podman[87709]: 2025-11-25 09:35:12.360261767 +0000 UTC m=+2.958087015 container init 3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2 (image=quay.io/ceph/keepalived:2.2.4, name=adoring_bhaskara, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, name=keepalived, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9)
Nov 25 09:35:12 compute-1 podman[87709]: 2025-11-25 09:35:12.36547321 +0000 UTC m=+2.963298438 container start 3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2 (image=quay.io/ceph/keepalived:2.2.4, name=adoring_bhaskara, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, architecture=x86_64, description=keepalived for Ceph, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, io.buildah.version=1.28.2)
Nov 25 09:35:12 compute-1 podman[87709]: 2025-11-25 09:35:12.366712245 +0000 UTC m=+2.964537473 container attach 3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2 (image=quay.io/ceph/keepalived:2.2.4, name=adoring_bhaskara, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=2.2.4, name=keepalived, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20)
Nov 25 09:35:12 compute-1 adoring_bhaskara[87790]: 0 0
Nov 25 09:35:12 compute-1 systemd[1]: libpod-3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2.scope: Deactivated successfully.
Nov 25 09:35:12 compute-1 conmon[87790]: conmon 3995200805d8fb253800 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2.scope/container/memory.events
Nov 25 09:35:12 compute-1 podman[87709]: 2025-11-25 09:35:12.371547309 +0000 UTC m=+2.969372537 container died 3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2 (image=quay.io/ceph/keepalived:2.2.4, name=adoring_bhaskara, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, version=2.2.4, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Nov 25 09:35:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-90524bb074334b13403d72007236014df02ceec80ada637cea177659cdd75035-merged.mount: Deactivated successfully.
Nov 25 09:35:12 compute-1 podman[87709]: 2025-11-25 09:35:12.388903823 +0000 UTC m=+2.986729051 container remove 3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2 (image=quay.io/ceph/keepalived:2.2.4, name=adoring_bhaskara, io.buildah.version=1.28.2, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, release=1793, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived)
Nov 25 09:35:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093512 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
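The haproxy health check reports backend server nfs.cephfs.0 DOWN at Layer 4 ("Connection refused"), consistent with the ganesha crash above taking its NFS endpoint offline. A hedged sketch for querying haproxy's runtime state through its admin socket (the socket path and the availability of socat inside this image are assumptions, not confirmed by the log):

  # from the host, exec into the haproxy container and dump per-server state
  podman exec ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq \
      sh -c 'echo "show servers state" | socat stdio /var/lib/haproxy/admin.sock'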
Nov 25 09:35:12 compute-1 systemd[1]: libpod-conmon-3995200805d8fb253800ac8ba812bd0bd18c063d798bbae233762653556e51c2.scope: Deactivated successfully.
Nov 25 09:35:12 compute-1 systemd[1]: Reloading.
Nov 25 09:35:12 compute-1 systemd-rc-local-generator[87828]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:35:12 compute-1 systemd-sysv-generator[87834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:35:12 compute-1 systemd[1]: Reloading.
Nov 25 09:35:12 compute-1 systemd-sysv-generator[87876]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:35:12 compute-1 systemd-rc-local-generator[87872]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:35:12 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.adsqcr for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:13 compute-1 podman[87925]: 2025-11-25 09:35:13.087065707 +0000 UTC m=+0.032010827 container create 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-type=git, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public)
Nov 25 09:35:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:13.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487c89508bd4cdbb6c8e57ca9fda1c1bccb551b17f0daf610e7dbea4dc233c47/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:13 compute-1 podman[87925]: 2025-11-25 09:35:13.13567056 +0000 UTC m=+0.080615670 container init 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph)
Nov 25 09:35:13 compute-1 podman[87925]: 2025-11-25 09:35:13.139303818 +0000 UTC m=+0.084248929 container start 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, architecture=x86_64, release=1793, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=)
Nov 25 09:35:13 compute-1 bash[87925]: 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e
Nov 25 09:35:13 compute-1 podman[87925]: 2025-11-25 09:35:13.073806646 +0000 UTC m=+0.018751766 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 09:35:13 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.adsqcr for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: Starting VRRP child process, pid=4
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: Startup complete
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: (VI_0) Entering BACKUP STATE (init)
Nov 25 09:35:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:13 2025: VRRP_Script(check_backend) succeeded
Nov 25 09:35:13 compute-1 sudo[87650]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:13.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:14 compute-1 ceph-mon[79643]: pgmap v18: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:15.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:15.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:15 compute-1 ceph-mon[79643]: pgmap v19: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:16 compute-1 sudo[87946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:35:16 compute-1 sudo[87946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:16 compute-1 sudo[87946]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:16 compute-1 sudo[87971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:16 compute-1 sudo[87971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:16 compute-1 sudo[87971]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:16 compute-1 sudo[87996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:35:16 compute-1 sudo[87996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:16 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 1.
Nov 25 09:35:16 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:16 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:16 compute-1 podman[88076]: 2025-11-25 09:35:16.763647771 +0000 UTC m=+0.039656869 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:16 2025: (VI_0) Entering MASTER STATE
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:16 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr[87937]: Tue Nov 25 09:35:16 2025: (VI_0) Entering BACKUP STATE
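This is normal VRRP arbitration rather than a fault: compute-1 briefly claimed MASTER, then yielded because compute-0 (192.168.122.100) advertises priority 100 against the local 90, so compute-0 holds the virtual IP. A quick host-side check, assuming the keepalived container uses host networking (addresses and interface taken from the log lines above):

  # the VIP 192.168.122.2 should appear on br-ex only on the current MASTER
  ip -4 addr show dev br-ex | grep 192.168.122.2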
Nov 25 09:35:16 compute-1 podman[88076]: 2025-11-25 09:35:16.839864658 +0000 UTC m=+0.115873737 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Nov 25 09:35:16 compute-1 podman[88135]: 2025-11-25 09:35:16.891161997 +0000 UTC m=+0.031920226 container create 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:35:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f2cd295f78725ebf252dc2f8b975e5882fc23e94b08a4e57f1b9ed19297003/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f2cd295f78725ebf252dc2f8b975e5882fc23e94b08a4e57f1b9ed19297003/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f2cd295f78725ebf252dc2f8b975e5882fc23e94b08a4e57f1b9ed19297003/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f2cd295f78725ebf252dc2f8b975e5882fc23e94b08a4e57f1b9ed19297003/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:16 compute-1 podman[88135]: 2025-11-25 09:35:16.92475253 +0000 UTC m=+0.065510779 container init 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:16 compute-1 podman[88135]: 2025-11-25 09:35:16.928976872 +0000 UTC m=+0.069735101 container start 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:35:16 compute-1 bash[88135]: 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665
Nov 25 09:35:16 compute-1 podman[88135]: 2025-11-25 09:35:16.879633387 +0000 UTC m=+0.020391616 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:16 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:35:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:35:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:17 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:35:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:17 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:35:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:17.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:17 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:17 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:17 compute-1 podman[88279]: 2025-11-25 09:35:17.197987969 +0000 UTC m=+0.034306393 container exec 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:17 compute-1 podman[88279]: 2025-11-25 09:35:17.208553443 +0000 UTC m=+0.044871867 container exec_died 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:17 compute-1 podman[88340]: 2025-11-25 09:35:17.36353445 +0000 UTC m=+0.034238425 container exec 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:17 compute-1 podman[88340]: 2025-11-25 09:35:17.37264399 +0000 UTC m=+0.043347965 container exec_died 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:17 compute-1 podman[88391]: 2025-11-25 09:35:17.506850731 +0000 UTC m=+0.032308308 container exec 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:35:17 compute-1 podman[88391]: 2025-11-25 09:35:17.515591445 +0000 UTC m=+0.041049043 container exec_died 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:35:17 compute-1 podman[88443]: 2025-11-25 09:35:17.648541567 +0000 UTC m=+0.032481524 container exec 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-type=git, architecture=x86_64, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, distribution-scope=public, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 25 09:35:17 compute-1 podman[88443]: 2025-11-25 09:35:17.66058656 +0000 UTC m=+0.044526498 container exec_died 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, io.buildah.version=1.28.2, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Nov 25 09:35:17 compute-1 sudo[87996]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:17.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:17 compute-1 sshd-session[88470]: Accepted publickey for zuul from 192.168.122.30 port 35282 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:35:17 compute-1 systemd-logind[746]: New session 35 of user zuul.
Nov 25 09:35:18 compute-1 systemd[1]: Started Session 35 of User zuul.
Nov 25 09:35:18 compute-1 sshd-session[88470]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:35:18 compute-1 ceph-mon[79643]: pgmap v20: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:35:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-1 python3.9[88623]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:35:18 compute-1 sudo[88624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:18 compute-1 sudo[88624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:18 compute-1 sudo[88624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:19.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:19 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:35:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:19.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:35:19 compute-1 sudo[88861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segwyatdapiexpwxoouunxtdsynbzkcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063319.5349162-57-235175223928350/AnsiballZ_command.py'
Nov 25 09:35:19 compute-1 sudo[88861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:20 compute-1 python3.9[88863]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:35:20 compute-1 ceph-mon[79643]: pgmap v21: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:21.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:21 compute-1 sudo[88877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:35:21 compute-1 sudo[88877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:21 compute-1 sudo[88877]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:21.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:22 compute-1 ceph-mon[79643]: pgmap v22: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.647109) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322647131, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1755, "num_deletes": 251, "total_data_size": 6620901, "memory_usage": 6937856, "flush_reason": "Manual Compaction"}
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322655932, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 4023900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 5440, "largest_seqno": 7190, "table_properties": {"data_size": 4016908, "index_size": 3742, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17514, "raw_average_key_size": 20, "raw_value_size": 4001389, "raw_average_value_size": 4690, "num_data_blocks": 170, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063277, "oldest_key_time": 1764063277, "file_creation_time": 1764063322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 8846 microseconds, and 6013 cpu microseconds.
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.655956) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 4023900 bytes OK
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.655968) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.656288) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.656297) EVENT_LOG_v1 {"time_micros": 1764063322656295, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.656307) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 6612205, prev total WAL file size 6612205, number of live WAL files 2.
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.657165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3929KB)], [15(9814KB)]
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322657208, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14074260, "oldest_snapshot_seqno": -1}
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 2634 keys, 12688368 bytes, temperature: kUnknown
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322684574, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 12688368, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12666752, "index_size": 13955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 66583, "raw_average_key_size": 25, "raw_value_size": 12613941, "raw_average_value_size": 4788, "num_data_blocks": 618, "num_entries": 2634, "num_filter_entries": 2634, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764063322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.684726) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12688368 bytes
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.685127) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 513.5 rd, 462.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.6 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(6.7) write-amplify(3.2) OK, records in: 3172, records dropped: 538 output_compression: NoCompression
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.685143) EVENT_LOG_v1 {"time_micros": 1764063322685136, "job": 6, "event": "compaction_finished", "compaction_time_micros": 27409, "compaction_time_cpu_micros": 15896, "output_level": 6, "num_output_files": 1, "total_output_size": 12688368, "num_input_records": 3172, "num_output_records": 2634, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322685762, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322686767, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.657110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:23 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:35:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:23 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:35:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:23.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:23 compute-1 ceph-mon[79643]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 25 09:35:23 compute-1 ceph-mon[79643]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 25 09:35:23 compute-1 ceph-mon[79643]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 25 09:35:23 compute-1 ceph-mon[79643]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 25 09:35:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:23.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:24 compute-1 ceph-mon[79643]: pgmap v23: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:24 compute-1 sudo[88915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:24 compute-1 sudo[88915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:24 compute-1 sudo[88915]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:25 compute-1 sudo[88940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:35:25 compute-1 sudo[88940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:25.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:25 compute-1 systemd[1]: Stopping Ceph node-exporter.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:25 compute-1 podman[89012]: 2025-11-25 09:35:25.346708765 +0000 UTC m=+0.038561784 container died 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-218e690cf1be2d09d985eece31397d812ffc2b57e35c9d3c89e935cd76f00418-merged.mount: Deactivated successfully.
Nov 25 09:35:25 compute-1 podman[89012]: 2025-11-25 09:35:25.364852984 +0000 UTC m=+0.056705993 container remove 2a1e927df99a4f6883dd678d1b8ad8ebba4024ed0429ba56ea266629e5f2b0b3 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:25 compute-1 bash[89012]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1
Nov 25 09:35:25 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@node-exporter.compute-1.service: Main process exited, code=exited, status=143/n/a
Nov 25 09:35:25 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@node-exporter.compute-1.service: Failed with result 'exit-code'.
Nov 25 09:35:25 compute-1 systemd[1]: Stopped Ceph node-exporter.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:25 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@node-exporter.compute-1.service: Consumed 1.672s CPU time.
Nov 25 09:35:25 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:25 compute-1 podman[89090]: 2025-11-25 09:35:25.58392343 +0000 UTC m=+0.024925924 container create 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2899c17558b0a01dbe9d1fc9ecb99c08b855cfb976554392a2f350dffc1b3ea/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:25 compute-1 podman[89090]: 2025-11-25 09:35:25.619529022 +0000 UTC m=+0.060531535 container init 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:25 compute-1 podman[89090]: 2025-11-25 09:35:25.622934911 +0000 UTC m=+0.063937405 container start 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:25 compute-1 bash[89090]: 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615
Nov 25 09:35:25 compute-1 podman[89090]: 2025-11-25 09:35:25.572861441 +0000 UTC m=+0.013863954 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.626Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.626Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 25 09:35:25 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.628Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.628Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.628Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=dmi
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=entropy
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=os
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=pressure
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=rapl
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=selinux
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=stat
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=textfile
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=time
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=uname
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 25 09:35:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1[89101]: ts=2025-11-25T09:35:25.629Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 25 09:35:25 compute-1 sudo[88940]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:25 compute-1 ceph-mon[79643]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 25 09:35:25 compute-1 ceph-mon[79643]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 25 09:35:25 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:25 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:25 compute-1 sudo[89112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:25 compute-1 sudo[89112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:25 compute-1 sudo[89112]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:25.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:25 compute-1 sudo[89137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:35:25 compute-1 sudo[89137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:25 compute-1 sudo[88861]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:26 compute-1 podman[89242]: 2025-11-25 09:35:26.187663803 +0000 UTC m=+0.035138411 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:26 compute-1 podman[89242]: 2025-11-25 09:35:26.268615485 +0000 UTC m=+0.116090084 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:35:26 compute-1 sshd-session[88473]: Connection closed by 192.168.122.30 port 35282
Nov 25 09:35:26 compute-1 sshd-session[88470]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:35:26 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Nov 25 09:35:26 compute-1 systemd[1]: session-35.scope: Consumed 6.379s CPU time.
Nov 25 09:35:26 compute-1 systemd-logind[746]: Session 35 logged out. Waiting for processes to exit.
Nov 25 09:35:26 compute-1 systemd-logind[746]: Removed session 35.
Nov 25 09:35:26 compute-1 podman[89351]: 2025-11-25 09:35:26.589810011 +0000 UTC m=+0.034581613 container exec 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:26 compute-1 podman[89351]: 2025-11-25 09:35:26.598559562 +0000 UTC m=+0.043331144 container exec_died 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:26 compute-1 ceph-mon[79643]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Nov 25 09:35:26 compute-1 ceph-mon[79643]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Nov 25 09:35:26 compute-1 ceph-mon[79643]: pgmap v24: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 25 09:35:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:26 compute-1 podman[89410]: 2025-11-25 09:35:26.747786063 +0000 UTC m=+0.032100336 container exec 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:35:26 compute-1 podman[89410]: 2025-11-25 09:35:26.757579382 +0000 UTC m=+0.041893654 container exec_died 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:26 compute-1 podman[89460]: 2025-11-25 09:35:26.884143807 +0000 UTC m=+0.031485236 container exec 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:35:26 compute-1 podman[89460]: 2025-11-25 09:35:26.894644931 +0000 UTC m=+0.041986360 container exec_died 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:35:27 compute-1 podman[89512]: 2025-11-25 09:35:27.0212698 +0000 UTC m=+0.031736620 container exec 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, vcs-type=git)
Nov 25 09:35:27 compute-1 podman[89512]: 2025-11-25 09:35:27.033586996 +0000 UTC m=+0.044053826 container exec_died 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.component=keepalived-container)
Nov 25 09:35:27 compute-1 sudo[89137]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:27.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:27.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-1 ceph-mon[79643]: pgmap v25: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:35:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:29.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:29.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:30 compute-1 ceph-mon[79643]: pgmap v26: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:35:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:30 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:30 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:31 compute-1 sudo[89557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:35:31 compute-1 sudo[89557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:31 compute-1 sudo[89557]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:31.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:31 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80034a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:31.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:31 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:31 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:31 compute-1 ceph-mon[79643]: pgmap v27: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:35:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:32 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80034a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093532 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:35:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:32 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80034a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:33 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:33.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:34 compute-1 ceph-mon[79643]: pgmap v28: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:34 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:34 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80034a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:35 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80034a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:35.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:36 compute-1 ceph-mon[79643]: pgmap v29: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:36 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:36 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:37.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:37 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80033e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:37.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:38 compute-1 ceph-mon[79643]: pgmap v30: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:38 compute-1 sudo[89586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:38 compute-1 sudo[89586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:38 compute-1 sudo[89586]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:39 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da8003440 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:39.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:40 compute-1 ceph-mon[79643]: pgmap v31: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:35:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:40 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:40 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:41.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:41 compute-1 sshd-session[89613]: Accepted publickey for zuul from 192.168.122.30 port 44214 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:35:41 compute-1 systemd-logind[746]: New session 36 of user zuul.
Nov 25 09:35:41 compute-1 systemd[1]: Started Session 36 of User zuul.
Nov 25 09:35:41 compute-1 sshd-session[89613]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:35:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:41 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:41.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:41 compute-1 python3.9[89766]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 09:35:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:42 compute-1 ceph-mon[79643]: pgmap v32: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:35:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:42 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:42 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da8005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:42 compute-1 python3.9[89940]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:35:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:43 compute-1 sudo[90095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsfrbwserblyfvrddwyumpjbvhxerupw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063343.2527688-94-5947919398393/AnsiballZ_command.py'
Nov 25 09:35:43 compute-1 sudo[90095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:43 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:43 compute-1 python3.9[90097]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:35:43 compute-1 sudo[90095]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:43.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:44 compute-1 ceph-mon[79643]: pgmap v33: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:35:44 compute-1 sudo[90248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upnjkzgaqlsdfqfslmdgprvdhnddctgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063344.0442097-130-38241622519342/AnsiballZ_stat.py'
Nov 25 09:35:44 compute-1 sudo[90248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:44 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:44 compute-1 python3.9[90250]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:35:44 compute-1 sudo[90248]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:44 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:45 compute-1 sudo[90402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyewrdabsxqjqujlphytofsyxyelrejf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063344.7962055-163-266362949421231/AnsiballZ_file.py'
Nov 25 09:35:45 compute-1 sudo[90402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:45 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 25 09:35:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:45 compute-1 python3.9[90404]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:35:45 compute-1 sudo[90402]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:45 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da8005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:45 compute-1 sudo[90555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alxguycjqexfczxmaogbryrejwxxmwdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063345.4591513-190-239728564591122/AnsiballZ_file.py'
Nov 25 09:35:45 compute-1 sudo[90555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:45 compute-1 python3.9[90557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:35:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:45.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:45 compute-1 sudo[90555]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:46 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 25 09:35:46 compute-1 ceph-mon[79643]: pgmap v34: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:35:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:46 compute-1 ceph-mon[79643]: osdmap e43: 3 total, 3 up, 3 in
Nov 25 09:35:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:46 compute-1 ceph-mon[79643]: osdmap e44: 3 total, 3 up, 3 in
Nov 25 09:35:46 compute-1 python3.9[90707]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:35:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:46 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:46 compute-1 network[90724]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:35:46 compute-1 network[90725]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:35:46 compute-1 network[90726]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:35:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:46 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 25 09:35:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:47 compute-1 ceph-mon[79643]: osdmap e45: 3 total, 3 up, 3 in
Nov 25 09:35:47 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 45 pg[2.0( empty local-lis/les=12/14 n=0 ec=12/12 lis/c=12/12 les/c/f=14/14/0 sis=45 pruub=15.455216408s) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active pruub 213.880462646s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:47 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 45 pg[2.0( empty local-lis/les=12/14 n=0 ec=12/12 lis/c=12/12 les/c/f=14/14/0 sis=45 pruub=15.455216408s) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown pruub 213.880462646s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:47 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:47.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:48 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1b( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1a( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.7( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.6( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.4( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.e( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.5( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.3( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.9( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.a( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.c( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.d( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.10( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.12( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1e( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.14( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.15( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.17( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-mon[79643]: pgmap v37: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 25 09:35:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:48 compute-1 ceph-mon[79643]: osdmap e46: 3 total, 3 up, 3 in
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=12/14 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1a( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.e( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.0( empty local-lis/les=45/46 n=0 ec=12/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.c( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.12( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.10( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1e( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.14( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=12/12 les/c/f=14/14/0 sis=45) [0] r=0 lpr=45 pi=[12,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:48 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:48 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:48 compute-1 python3.9[90987]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:35:48 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 25 09:35:48 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 25 09:35:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:49 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 25 09:35:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 25 09:35:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 25 09:35:49 compute-1 python3.9[91138]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:35:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:49 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da8005700 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:49.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:49 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 25 09:35:49 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 25 09:35:50 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 25 09:35:50 compute-1 ceph-mon[79643]: 3.1d scrub starts
Nov 25 09:35:50 compute-1 ceph-mon[79643]: 3.1d scrub ok
Nov 25 09:35:50 compute-1 ceph-mon[79643]: 2.1f scrub starts
Nov 25 09:35:50 compute-1 ceph-mon[79643]: 2.1f scrub ok
Nov 25 09:35:50 compute-1 ceph-mon[79643]: pgmap v40: 74 pgs: 62 unknown, 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:50 compute-1 ceph-mon[79643]: osdmap e47: 3 total, 3 up, 3 in
Nov 25 09:35:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:50 compute-1 ceph-mon[79643]: osdmap e48: 3 total, 3 up, 3 in
Nov 25 09:35:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:50 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da8005700 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:50 compute-1 python3.9[91292]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:35:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:50 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da8005700 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:50 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 25 09:35:50 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 25 09:35:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:51 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 25 09:35:51 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 49 pg[7.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=49 pruub=8.549090385s) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active pruub 210.926330566s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:51 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 49 pg[7.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=49 pruub=8.549090385s) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown pruub 210.926330566s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:51 compute-1 ceph-mon[79643]: 3.a deep-scrub starts
Nov 25 09:35:51 compute-1 ceph-mon[79643]: 3.a deep-scrub ok
Nov 25 09:35:51 compute-1 ceph-mon[79643]: 2.1d scrub starts
Nov 25 09:35:51 compute-1 ceph-mon[79643]: 2.1d scrub ok
Nov 25 09:35:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 25 09:35:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:51 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 25 09:35:51 compute-1 ceph-mon[79643]: osdmap e49: 3 total, 3 up, 3 in
Nov 25 09:35:51 compute-1 sudo[91449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnzbrxzvhrzjitifpvcmprvufcxlxljn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063351.083671-334-180991979132444/AnsiballZ_setup.py'
Nov 25 09:35:51 compute-1 sudo[91449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:51 compute-1 python3.9[91451]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:35:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:51 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:51 compute-1 sudo[91449]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:51.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:52 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 25 09:35:52 compute-1 sudo[91533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqzhuydetosnupphvzgxrlhlcaearwmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063351.083671-334-180991979132444/AnsiballZ_dnf.py'
Nov 25 09:35:52 compute-1 sudo[91533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:52 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 25 09:35:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:52 compute-1 python3.9[91535]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:35:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1d( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.12( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.10( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.11( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.16( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.17( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.14( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.15( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.8( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.e( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.c( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.5( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.d( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.b( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.7( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.2( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1c( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1e( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.19( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.12( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1a( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.17( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.a( empty local-lis/les=18/19 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.d( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.0( empty local-lis/les=49/50 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.7( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.19( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-mon[79643]: 3.7 scrub starts
Nov 25 09:35:52 compute-1 ceph-mon[79643]: 3.7 scrub ok
Nov 25 09:35:52 compute-1 ceph-mon[79643]: 2.1b scrub starts
Nov 25 09:35:52 compute-1 ceph-mon[79643]: 2.1b scrub ok
Nov 25 09:35:52 compute-1 ceph-mon[79643]: pgmap v43: 136 pgs: 31 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 09:35:52 compute-1 ceph-mon[79643]: 4.1e scrub starts
Nov 25 09:35:52 compute-1 ceph-mon[79643]: 4.1e scrub ok
Nov 25 09:35:52 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:52 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:52 compute-1 ceph-mon[79643]: osdmap e50: 3 total, 3 up, 3 in
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 50 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=18/18 les/c/f=19/19/0 sis=49) [0] r=0 lpr=49 pi=[18,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:52 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:52 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:53 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 25 09:35:53 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 25 09:35:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 25 09:35:53 compute-1 ceph-mon[79643]: 3.8 scrub starts
Nov 25 09:35:53 compute-1 ceph-mon[79643]: 3.8 scrub ok
Nov 25 09:35:53 compute-1 ceph-mon[79643]: 2.1a scrub starts
Nov 25 09:35:53 compute-1 ceph-mon[79643]: 2.1a scrub ok
Nov 25 09:35:53 compute-1 ceph-mon[79643]: 4.1c deep-scrub starts
Nov 25 09:35:53 compute-1 ceph-mon[79643]: 4.1c deep-scrub ok
Nov 25 09:35:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:53 compute-1 ceph-mon[79643]: osdmap e51: 3 total, 3 up, 3 in
Nov 25 09:35:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:53 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80058a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:53.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:54 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 25 09:35:54 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 25 09:35:54 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 25 09:35:54 compute-1 ceph-mon[79643]: 3.b scrub starts
Nov 25 09:35:54 compute-1 ceph-mon[79643]: 3.b scrub ok
Nov 25 09:35:54 compute-1 ceph-mon[79643]: 2.7 deep-scrub starts
Nov 25 09:35:54 compute-1 ceph-mon[79643]: 2.7 deep-scrub ok
Nov 25 09:35:54 compute-1 ceph-mon[79643]: pgmap v46: 182 pgs: 77 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 09:35:54 compute-1 ceph-mon[79643]: 4.1b deep-scrub starts
Nov 25 09:35:54 compute-1 ceph-mon[79643]: 4.1b deep-scrub ok
Nov 25 09:35:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:54 compute-1 ceph-mon[79643]: osdmap e52: 3 total, 3 up, 3 in
Nov 25 09:35:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:54 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:54 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:54 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Nov 25 09:35:55 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Nov 25 09:35:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:55.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:55 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 25 09:35:55 compute-1 ceph-mon[79643]: 3.4 scrub starts
Nov 25 09:35:55 compute-1 ceph-mon[79643]: 3.4 scrub ok
Nov 25 09:35:55 compute-1 ceph-mon[79643]: 2.19 scrub starts
Nov 25 09:35:55 compute-1 ceph-mon[79643]: 2.19 scrub ok
Nov 25 09:35:55 compute-1 ceph-mon[79643]: 4.1d scrub starts
Nov 25 09:35:55 compute-1 ceph-mon[79643]: 4.1d scrub ok
Nov 25 09:35:55 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:55 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:55 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:55 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:55 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:55 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:55 compute-1 ceph-mon[79643]: osdmap e53: 3 total, 3 up, 3 in
Nov 25 09:35:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:55 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:55.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:55 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 25 09:35:55 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 53 pg[10.0( v 42'96 (0'0,42'96] local-lis/les=31/32 n=8 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=53 pruub=10.958458900s) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 42'95 mlcod 42'95 active pruub 218.102203369s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 53 pg[10.0( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=53 pruub=10.958458900s) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 42'95 mlcod 0'0 unknown pruub 218.102203369s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.10( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1e( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1d( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1c( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1f( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1b( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1a( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.18( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.19( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.7( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.5( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.4( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.2( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1( v 42'96 (0'0,42'96] local-lis/les=31/32 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.b( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.8( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.3( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.d( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.6( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.9( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.c( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.e( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.f( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.11( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.12( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.13( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.14( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.15( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.16( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.17( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.a( v 42'96 lc 0'0 (0'0,42'96] local-lis/les=31/32 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.10( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1d( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1b( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1e( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1a( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1c( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.7( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.4( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.19( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1f( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.1( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.b( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.2( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.3( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.8( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.0( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 42'95 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.6( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.c( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.9( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.e( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.f( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.d( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.12( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.11( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.13( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.18( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.15( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.14( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.16( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.17( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.5( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 54 pg[10.a( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=31/31 les/c/f=32/32/0 sis=53) [0] r=0 lpr=53 pi=[31,53)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-1 ceph-mon[79643]: 3.5 scrub starts
Nov 25 09:35:56 compute-1 ceph-mon[79643]: 3.5 scrub ok
Nov 25 09:35:56 compute-1 ceph-mon[79643]: 2.1 deep-scrub starts
Nov 25 09:35:56 compute-1 ceph-mon[79643]: 2.1 deep-scrub ok
Nov 25 09:35:56 compute-1 ceph-mon[79643]: pgmap v49: 244 pgs: 139 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:56 compute-1 ceph-mon[79643]: 4.1a scrub starts
Nov 25 09:35:56 compute-1 ceph-mon[79643]: 4.1a scrub ok
Nov 25 09:35:56 compute-1 ceph-mon[79643]: osdmap e54: 3 total, 3 up, 3 in
Nov 25 09:35:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:56 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80058a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:56 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:56 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 25 09:35:56 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 25 09:35:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:57.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:57 compute-1 ceph-mon[79643]: 3.1e scrub starts
Nov 25 09:35:57 compute-1 ceph-mon[79643]: 3.1e scrub ok
Nov 25 09:35:57 compute-1 ceph-mon[79643]: 2.2 scrub starts
Nov 25 09:35:57 compute-1 ceph-mon[79643]: 2.2 scrub ok
Nov 25 09:35:57 compute-1 ceph-mon[79643]: 4.19 scrub starts
Nov 25 09:35:57 compute-1 ceph-mon[79643]: 4.19 scrub ok
Nov 25 09:35:57 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 25 09:35:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:57 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:57.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:57 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 25 09:35:57 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 25 09:35:58 compute-1 ceph-mon[79643]: 3.1f scrub starts
Nov 25 09:35:58 compute-1 ceph-mon[79643]: 3.1f scrub ok
Nov 25 09:35:58 compute-1 ceph-mon[79643]: 2.8 scrub starts
Nov 25 09:35:58 compute-1 ceph-mon[79643]: 2.8 scrub ok
Nov 25 09:35:58 compute-1 ceph-mon[79643]: pgmap v52: 306 pgs: 1 peering, 62 unknown, 243 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:58 compute-1 ceph-mon[79643]: 4.6 scrub starts
Nov 25 09:35:58 compute-1 ceph-mon[79643]: 4.6 scrub ok
Nov 25 09:35:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:58 compute-1 ceph-mon[79643]: osdmap e55: 3 total, 3 up, 3 in
Nov 25 09:35:58 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 55 pg[12.0( v 42'64 (0'0,42'64] local-lis/les=39/40 n=5 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=55 pruub=13.808701515s) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 42'63 mlcod 42'63 active pruub 223.357406616s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:58 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 55 pg[12.0( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=55 pruub=13.808701515s) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 42'63 mlcod 0'0 unknown pruub 223.357406616s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:58 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0).collection(12.0_head 0x5584d1f25440) operator()   moving buffer(0x5584d2890988 space 0x5584d2687d50 0x0~1000 clean)
Nov 25 09:35:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:58 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:58 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80058a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:58 compute-1 sudo[91591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:58 compute-1 sudo[91591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:58 compute-1 sudo[91591]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:58 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Nov 25 09:35:58 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Nov 25 09:35:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:59.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:59 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.10( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.13( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.11( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.12( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.15( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.17( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.14( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.9( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.8( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.a( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.c( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.f( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.6( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.b( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.e( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.d( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.4( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.7( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.2( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.5( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1e( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1( v 42'64 (0'0,42'64] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1f( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.3( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1d( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1a( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1b( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1c( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.18( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.19( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.16( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.10( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.11( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.12( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.13( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.14( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.9( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.15( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.17( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.8( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.f( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.0( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 42'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.c( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.b( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.6( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.e( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.d( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.7( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.4( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.2( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.5( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1e( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1f( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.3( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1d( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.19( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1c( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1b( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.a( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.1a( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.18( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 56 pg[12.16( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [0] r=0 lpr=55 pi=[39,55)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:59 compute-1 ceph-mon[79643]: 3.3 scrub starts
Nov 25 09:35:59 compute-1 ceph-mon[79643]: 3.3 scrub ok
Nov 25 09:35:59 compute-1 ceph-mon[79643]: 2.5 scrub starts
Nov 25 09:35:59 compute-1 ceph-mon[79643]: 2.5 scrub ok
Nov 25 09:35:59 compute-1 ceph-mon[79643]: 4.5 deep-scrub starts
Nov 25 09:35:59 compute-1 ceph-mon[79643]: 4.5 deep-scrub ok
Nov 25 09:35:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:35:59 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80058a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:35:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:35:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:59.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:35:59 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Nov 25 09:35:59 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Nov 25 09:36:00 compute-1 ceph-mon[79643]: 3.2 scrub starts
Nov 25 09:36:00 compute-1 ceph-mon[79643]: 3.2 scrub ok
Nov 25 09:36:00 compute-1 ceph-mon[79643]: 2.0 scrub starts
Nov 25 09:36:00 compute-1 ceph-mon[79643]: 2.0 scrub ok
Nov 25 09:36:00 compute-1 ceph-mon[79643]: pgmap v54: 337 pgs: 1 peering, 93 unknown, 243 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:00 compute-1 ceph-mon[79643]: 4.4 scrub starts
Nov 25 09:36:00 compute-1 ceph-mon[79643]: 4.4 scrub ok
Nov 25 09:36:00 compute-1 ceph-mon[79643]: osdmap e56: 3 total, 3 up, 3 in
Nov 25 09:36:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:00 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da80058a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:00 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:00 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 25 09:36:00 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 25 09:36:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:01.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:01 compute-1 ceph-mon[79643]: 3.c scrub starts
Nov 25 09:36:01 compute-1 ceph-mon[79643]: 3.c scrub ok
Nov 25 09:36:01 compute-1 ceph-mon[79643]: 2.3 deep-scrub starts
Nov 25 09:36:01 compute-1 ceph-mon[79643]: 2.3 deep-scrub ok
Nov 25 09:36:01 compute-1 ceph-mon[79643]: 4.3 scrub starts
Nov 25 09:36:01 compute-1 ceph-mon[79643]: 4.3 scrub ok
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-1 ceph-mon[79643]: 4.1 deep-scrub starts
Nov 25 09:36:01 compute-1 ceph-mon[79643]: 4.1 deep-scrub ok
Nov 25 09:36:01 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1f( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924823761s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370086670s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1f( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924795151s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370086670s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.11( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.983586311s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.428894043s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.11( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.983563423s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.428894043s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1e( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.925950050s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.371353149s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1e( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.925935745s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.371353149s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.937954903s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.383407593s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.937944412s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.383407593s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.10( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982782364s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 42'64 mlcod 42'64 active pruub 226.428253174s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1d( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924590111s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370101929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.10( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982761383s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 42'64 mlcod 0'0 unknown NOTIFY pruub 226.428253174s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1d( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924579620s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370101929s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.939458847s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.385009766s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.939449310s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.385009766s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.15( v 54'99 (0'0,54'99] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954924583s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=54'97 lcod 54'98 mlcod 54'98 active pruub 223.400512695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.15( v 54'99 (0'0,54'99] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954905510s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=54'97 lcod 54'98 mlcod 0'0 unknown NOTIFY pruub 223.400512695s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.13( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.983297348s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.428939819s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.13( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.983286858s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.428939819s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1c( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.926576614s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.372238159s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1c( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.926567078s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.372238159s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.14( v 54'99 (0'0,54'99] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954810143s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=54'97 lcod 54'98 mlcod 54'98 active pruub 223.400512695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.12( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.983169556s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.428894043s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.14( v 54'99 (0'0,54'99] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954792023s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=54'97 lcod 54'98 mlcod 0'0 unknown NOTIFY pruub 223.400512695s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.12( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.983160019s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.428894043s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1b( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924552917s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370300293s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1b( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924544334s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370300293s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.936464310s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.382278442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.936454773s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.382278442s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.13( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954640388s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.400482178s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.13( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954629898s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.400482178s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.936402321s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.382278442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.12( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954543114s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.400421143s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.936392784s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.382278442s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.12( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954532623s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.400421143s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.19( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924383163s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370315552s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.19( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924373627s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370315552s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.11( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954473495s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.400436401s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.11( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.954464912s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.400436401s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.17( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.983070374s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.429061890s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.936274529s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.382278442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.936265945s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.382278442s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.f( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.953560829s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.399627686s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.9( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982915878s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.428970337s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.935658455s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.381744385s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.f( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.953551292s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.399627686s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.9( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982906342s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.428970337s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.935648918s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.381744385s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.6( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924505234s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370651245s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.6( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924496651s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370651245s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.8( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982897758s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.429077148s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.8( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982888222s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.429077148s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.4( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924447060s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370635986s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.4( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924425125s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370635986s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.935434341s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.381744385s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.c( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982789993s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.429107666s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.935424805s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.381744385s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.c( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982780457s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.429107666s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924280167s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370651245s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.1( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924270630s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370651245s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.935253143s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.381683350s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.935243607s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.381683350s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.e( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924165726s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370681763s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.e( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924157143s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370681763s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.6( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982563972s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.429122925s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.6( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982555389s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.429122925s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.5( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924093246s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370681763s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.5( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924084663s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370681763s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.b( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982428551s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.429107666s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.935015678s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.381713867s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.b( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982419014s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.429107666s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.935006142s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.381713867s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.8( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.952704430s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.399475098s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.e( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982332230s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.429122925s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.8( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.952693939s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.399475098s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.e( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982321739s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.429122925s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.934807777s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.381652832s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.934799194s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.381652832s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.9( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923849106s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370773315s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.1( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.951938629s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.398864746s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.9( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923838615s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370773315s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.1( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.951930046s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.398864746s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.7( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982141495s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.429153442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.933849335s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380874634s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.7( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982131958s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.429153442s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.933840752s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.a( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923682213s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.370773315s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.2( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.951801300s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.398910522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.a( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923674583s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.370773315s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.2( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.951790810s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.398910522s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.4( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.981972694s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.429138184s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.933695793s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380874634s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.4( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.981964111s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.429138184s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.933686256s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.b( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924025536s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.371261597s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.3( v 54'99 (0'0,54'99] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.952005386s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=54'97 lcod 54'98 mlcod 54'98 active pruub 223.399261475s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.b( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.924017906s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.371261597s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.3( v 54'99 (0'0,54'99] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.951986313s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=54'97 lcod 54'98 mlcod 0'0 unknown NOTIFY pruub 223.399261475s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.933524132s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380874634s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.c( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923911095s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.371261597s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.933494568s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.c( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923898697s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.371261597s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.4( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.951366425s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.398818970s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.2( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982749939s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.430206299s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.4( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.951356888s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.398818970s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.2( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982739449s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.430206299s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.933321953s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380874634s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.d( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923734665s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.371276855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.a( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982666969s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.430206299s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.933312416s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.d( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923726082s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.371276855s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.a( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982652664s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.430206299s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.5( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.953249931s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.400909424s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.3( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982639313s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.430297852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.5( v 42'96 (0'0,42'96] local-lis/les=53/54 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.953239441s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.400909424s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.3( v 42'64 (0'0,42'64] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982627869s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.430297852s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.937431335s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.385177612s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.937422752s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.385177612s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.f( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923449516s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.371276855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.f( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923438072s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.371276855s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.1e( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982372284s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.430236816s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.1e( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982362747s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.430236816s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.10( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923428535s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.371353149s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.10( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923420906s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.371353149s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.18( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.952538490s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.400497437s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.18( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.952529907s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.400497437s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932808876s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380828857s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932801247s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380828857s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.19( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.950802803s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.398834229s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.19( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.950795174s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.398834229s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.12( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923254013s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.371353149s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.12( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923244476s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.371353149s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.1c( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982209206s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.430358887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.1c( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982192993s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.430358887s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932636261s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380798340s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932628632s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380798340s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.13( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923129082s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.371368408s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.1b( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.950489044s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.398727417s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.13( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923120499s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.371368408s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.1b( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.950480461s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.398727417s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.1d( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982017517s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.430313110s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932424545s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380737305s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.1d( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982007980s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.430313110s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932416916s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380737305s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.15( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923811913s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.372238159s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.1a( v 56'67 (0'0,56'67] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.982011795s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=56'65 lcod 56'66 mlcod 56'66 active pruub 226.430435181s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.15( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923801422s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.372238159s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.1a( v 56'67 (0'0,56'67] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.981992722s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=56'65 lcod 56'66 mlcod 0'0 unknown NOTIFY pruub 226.430435181s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932223320s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380722046s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932214737s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380722046s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.1e( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.950213432s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.398742676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.1e( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.950204849s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.398742676s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.18( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.981875420s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.430465698s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.18( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.981847763s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.430465698s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.19( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.981669426s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active pruub 226.430343628s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.19( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.981657982s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.430343628s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.18( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923534393s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 223.372238159s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[2.18( empty local-lis/les=45/46 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.923525810s) [2] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 223.372238159s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.932002068s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380752563s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.931993484s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380752563s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.10( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.949876785s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active pruub 223.398651123s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.931944847s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 227.380737305s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[10.10( v 42'96 (0'0,42'96] local-lis/les=53/54 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.949867249s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.398651123s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.931928635s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.380737305s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[12.17( v 42'64 (0'0,42'64] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=13.980209351s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.429061890s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.12( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.10( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.1b( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.17( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.1a( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.4( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.5( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.1c( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.1f( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.11( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.16( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.10( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.d( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.14( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.c( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.13( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.10( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.16( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.f( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.9( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.d( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.c( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.3( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.8( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.5( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.e( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.1( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.f( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.a( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[3.1c( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.1b( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[5.18( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.a( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.18( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.1b( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.18( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.14( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[8.19( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 57 pg[4.13( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-1 anacron[4572]: Job `cron.weekly' started
Nov 25 09:36:01 compute-1 anacron[4572]: Job `cron.weekly' terminated
Nov 25 09:36:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:01 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:01.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:02 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Nov 25 09:36:02 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Nov 25 09:36:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:02 compute-1 ceph-mon[79643]: 3.d scrub starts
Nov 25 09:36:02 compute-1 ceph-mon[79643]: 3.d scrub ok
Nov 25 09:36:02 compute-1 ceph-mon[79643]: 2.9 scrub starts
Nov 25 09:36:02 compute-1 ceph-mon[79643]: 2.9 scrub ok
Nov 25 09:36:02 compute-1 ceph-mon[79643]: pgmap v56: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 523 B/s rd, 0 op/s
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-1 ceph-mon[79643]: osdmap e57: 3 total, 3 up, 3 in
Nov 25 09:36:02 compute-1 ceph-mon[79643]: 11.10 deep-scrub starts
Nov 25 09:36:02 compute-1 ceph-mon[79643]: 11.10 deep-scrub ok
Nov 25 09:36:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.18( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.14( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.18( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.14( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.1b( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.17( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.12( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.1b( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.1a( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.12( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.1c( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.1c( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.10( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.1( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.f( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.8( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.3( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.f( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.9( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.5( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.7( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.4( v 28'12 (0'0,28'12] local-lis/les=57/58 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.f( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.1( v 42'2 (0'0,42'2] local-lis/les=57/58 n=1 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.2( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.7( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.c( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.a( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.d( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.a( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.5( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.d( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.5( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.c( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.1b( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.16( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.10( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.4( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.1b( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.18( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.13( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.15( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.1a( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.13( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.1c( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.11( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.10( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.14( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.1e( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[3.16( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[11.1d( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[5.1f( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[8.19( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=28'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 58 pg[4.e( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:02 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:02 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:03 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Nov 25 09:36:03 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Nov 25 09:36:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:03.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 25 09:36:03 compute-1 ceph-mon[79643]: 3.1b scrub starts
Nov 25 09:36:03 compute-1 ceph-mon[79643]: 3.1b scrub ok
Nov 25 09:36:03 compute-1 ceph-mon[79643]: 10.17 deep-scrub starts
Nov 25 09:36:03 compute-1 ceph-mon[79643]: 10.17 deep-scrub ok
Nov 25 09:36:03 compute-1 ceph-mon[79643]: osdmap e58: 3 total, 3 up, 3 in
Nov 25 09:36:03 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 09:36:03 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 09:36:03 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:03 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:03 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:03 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:03 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:03.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:04 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 25 09:36:04 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 25 09:36:04 compute-1 ceph-mon[79643]: 7.1a deep-scrub starts
Nov 25 09:36:04 compute-1 ceph-mon[79643]: 7.1a deep-scrub ok
Nov 25 09:36:04 compute-1 ceph-mon[79643]: pgmap v59: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 526 B/s rd, 0 op/s
Nov 25 09:36:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 09:36:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 09:36:04 compute-1 ceph-mon[79643]: osdmap e59: 3 total, 3 up, 3 in
Nov 25 09:36:04 compute-1 ceph-mon[79643]: 8.13 scrub starts
Nov 25 09:36:04 compute-1 ceph-mon[79643]: 8.13 scrub ok
Nov 25 09:36:04 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 25 09:36:04 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 60 pg[6.a( v 42'42 (0'0,42'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:04 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 60 pg[6.2( v 42'42 (0'0,42'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:04 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 60 pg[6.6( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=42'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:04 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 60 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:04 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:04 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:05 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 25 09:36:05 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 25 09:36:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:05.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 25 09:36:05 compute-1 ceph-mon[79643]: 10.16 scrub starts
Nov 25 09:36:05 compute-1 ceph-mon[79643]: 10.16 scrub ok
Nov 25 09:36:05 compute-1 ceph-mon[79643]: 11.11 scrub starts
Nov 25 09:36:05 compute-1 ceph-mon[79643]: 11.11 scrub ok
Nov 25 09:36:05 compute-1 ceph-mon[79643]: osdmap e60: 3 total, 3 up, 3 in
Nov 25 09:36:05 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 09:36:05 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 09:36:05 compute-1 ceph-mon[79643]: 11.15 deep-scrub starts
Nov 25 09:36:05 compute-1 ceph-mon[79643]: 11.15 deep-scrub ok
Nov 25 09:36:05 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 61 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:05 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:05 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 61 pg[6.7( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:05 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 61 pg[6.3( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:05 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:05.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:05 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.15 deep-scrub starts
Nov 25 09:36:06 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.15 deep-scrub ok
Nov 25 09:36:06 compute-1 ceph-mon[79643]: 8.f scrub starts
Nov 25 09:36:06 compute-1 ceph-mon[79643]: 8.f scrub ok
Nov 25 09:36:06 compute-1 ceph-mon[79643]: 7.19 scrub starts
Nov 25 09:36:06 compute-1 ceph-mon[79643]: 7.19 scrub ok
Nov 25 09:36:06 compute-1 ceph-mon[79643]: pgmap v62: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:06 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 09:36:06 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 09:36:06 compute-1 ceph-mon[79643]: osdmap e61: 3 total, 3 up, 3 in
Nov 25 09:36:06 compute-1 ceph-mon[79643]: 4.7 scrub starts
Nov 25 09:36:06 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 25 09:36:06 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 62 pg[6.7( v 42'42 lc 41'14 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:06 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 62 pg[6.3( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=2 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:06 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 62 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:06 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 62 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:06 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:06 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:07 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Nov 25 09:36:07 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Nov 25 09:36:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:07 compute-1 ceph-mon[79643]: 8.6 scrub starts
Nov 25 09:36:07 compute-1 ceph-mon[79643]: 8.6 scrub ok
Nov 25 09:36:07 compute-1 ceph-mon[79643]: 12.15 deep-scrub starts
Nov 25 09:36:07 compute-1 ceph-mon[79643]: 12.15 deep-scrub ok
Nov 25 09:36:07 compute-1 ceph-mon[79643]: 4.7 scrub ok
Nov 25 09:36:07 compute-1 ceph-mon[79643]: osdmap e62: 3 total, 3 up, 3 in
Nov 25 09:36:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 25 09:36:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:07 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0006440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093607 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:36:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:07.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:08 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 25 09:36:08 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 25 09:36:08 compute-1 ceph-mon[79643]: 12.1a scrub starts
Nov 25 09:36:08 compute-1 ceph-mon[79643]: 12.1a scrub ok
Nov 25 09:36:08 compute-1 ceph-mon[79643]: 12.14 scrub starts
Nov 25 09:36:08 compute-1 ceph-mon[79643]: 12.14 scrub ok
Nov 25 09:36:08 compute-1 ceph-mon[79643]: pgmap v65: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 465 B/s, 3 keys/s, 6 objects/s recovering
Nov 25 09:36:08 compute-1 ceph-mon[79643]: 11.9 deep-scrub starts
Nov 25 09:36:08 compute-1 ceph-mon[79643]: 11.9 deep-scrub ok
Nov 25 09:36:08 compute-1 ceph-mon[79643]: osdmap e63: 3 total, 3 up, 3 in
Nov 25 09:36:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 25 09:36:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:08 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:08 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:09 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 25 09:36:09 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 25 09:36:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:09.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 25 09:36:09 compute-1 ceph-mon[79643]: 12.18 scrub starts
Nov 25 09:36:09 compute-1 ceph-mon[79643]: 12.18 scrub ok
Nov 25 09:36:09 compute-1 ceph-mon[79643]: 7.1c deep-scrub starts
Nov 25 09:36:09 compute-1 ceph-mon[79643]: 7.1c deep-scrub ok
Nov 25 09:36:09 compute-1 ceph-mon[79643]: 11.b deep-scrub starts
Nov 25 09:36:09 compute-1 ceph-mon[79643]: 11.b deep-scrub ok
Nov 25 09:36:09 compute-1 ceph-mon[79643]: osdmap e64: 3 total, 3 up, 3 in
Nov 25 09:36:09 compute-1 ceph-mon[79643]: 11.c scrub starts
Nov 25 09:36:09 compute-1 ceph-mon[79643]: 11.c scrub ok
Nov 25 09:36:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:09 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:36:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:09.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:36:10 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 25 09:36:10 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 25 09:36:10 compute-1 ceph-mon[79643]: 3.1a scrub starts
Nov 25 09:36:10 compute-1 ceph-mon[79643]: 3.1a scrub ok
Nov 25 09:36:10 compute-1 ceph-mon[79643]: 10.e scrub starts
Nov 25 09:36:10 compute-1 ceph-mon[79643]: 10.e scrub ok
Nov 25 09:36:10 compute-1 ceph-mon[79643]: pgmap v68: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 465 B/s, 3 keys/s, 6 objects/s recovering
Nov 25 09:36:10 compute-1 ceph-mon[79643]: osdmap e65: 3 total, 3 up, 3 in
Nov 25 09:36:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:10 compute-1 ceph-mon[79643]: 11.0 scrub starts
Nov 25 09:36:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:10 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0006440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:10 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:11 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 25 09:36:11 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 25 09:36:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:11.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 25 09:36:11 compute-1 ceph-mon[79643]: 3.15 scrub starts
Nov 25 09:36:11 compute-1 ceph-mon[79643]: 3.15 scrub ok
Nov 25 09:36:11 compute-1 ceph-mon[79643]: 7.1 scrub starts
Nov 25 09:36:11 compute-1 ceph-mon[79643]: 7.1 scrub ok
Nov 25 09:36:11 compute-1 ceph-mon[79643]: 11.0 scrub ok
Nov 25 09:36:11 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 09:36:11 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 09:36:11 compute-1 ceph-mon[79643]: 4.f deep-scrub starts
Nov 25 09:36:11 compute-1 ceph-mon[79643]: 4.f deep-scrub ok
Nov 25 09:36:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:11 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:11.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:12 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Nov 25 09:36:12 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Nov 25 09:36:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:12 compute-1 ceph-mon[79643]: 5.13 scrub starts
Nov 25 09:36:12 compute-1 ceph-mon[79643]: 5.13 scrub ok
Nov 25 09:36:12 compute-1 ceph-mon[79643]: 10.c scrub starts
Nov 25 09:36:12 compute-1 ceph-mon[79643]: 10.c scrub ok
Nov 25 09:36:12 compute-1 ceph-mon[79643]: pgmap v70: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 386 B/s, 1 keys/s, 9 objects/s recovering
Nov 25 09:36:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 09:36:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 09:36:12 compute-1 ceph-mon[79643]: osdmap e66: 3 total, 3 up, 3 in
Nov 25 09:36:12 compute-1 ceph-mon[79643]: 8.1 scrub starts
Nov 25 09:36:12 compute-1 ceph-mon[79643]: 8.1 scrub ok
Nov 25 09:36:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:12 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:12 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:13 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 25 09:36:13 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 25 09:36:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:13 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 25 09:36:13 compute-1 ceph-mon[79643]: 5.12 scrub starts
Nov 25 09:36:13 compute-1 ceph-mon[79643]: 5.12 scrub ok
Nov 25 09:36:13 compute-1 ceph-mon[79643]: 7.7 deep-scrub starts
Nov 25 09:36:13 compute-1 ceph-mon[79643]: 7.7 deep-scrub ok
Nov 25 09:36:13 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 09:36:13 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 09:36:13 compute-1 ceph-mon[79643]: 8.0 scrub starts
Nov 25 09:36:13 compute-1 ceph-mon[79643]: 8.0 scrub ok
Nov 25 09:36:13 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 67 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:13 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 67 pg[6.5( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:13 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:13.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:14 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 25 09:36:14 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 25 09:36:14 compute-1 ceph-mon[79643]: 3.11 deep-scrub starts
Nov 25 09:36:14 compute-1 ceph-mon[79643]: 3.11 deep-scrub ok
Nov 25 09:36:14 compute-1 ceph-mon[79643]: 10.a scrub starts
Nov 25 09:36:14 compute-1 ceph-mon[79643]: 10.a scrub ok
Nov 25 09:36:14 compute-1 ceph-mon[79643]: pgmap v72: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 319 B/s, 1 keys/s, 7 objects/s recovering
Nov 25 09:36:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 09:36:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 09:36:14 compute-1 ceph-mon[79643]: osdmap e67: 3 total, 3 up, 3 in
Nov 25 09:36:14 compute-1 ceph-mon[79643]: 8.e scrub starts
Nov 25 09:36:14 compute-1 ceph-mon[79643]: 8.e scrub ok
Nov 25 09:36:14 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 25 09:36:14 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 68 pg[6.d( v 42'42 lc 41'7 (0'0,42'42] local-lis/les=67/68 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:14 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 68 pg[6.5( v 42'42 lc 41'6 (0'0,42'42] local-lis/les=67/68 n=2 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:14 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:14 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de00075c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:15 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.f scrub starts
Nov 25 09:36:15 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.f scrub ok
Nov 25 09:36:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:15.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:15 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 25 09:36:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 69 pg[6.6( v 42'42 (0'0,42'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=12.915654182s) [1] r=-1 lpr=69 pi=[59,69)/1 crt=42'42 mlcod 42'42 active pruub 239.476913452s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 69 pg[6.6( v 42'42 (0'0,42'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=12.915621758s) [1] r=-1 lpr=69 pi=[59,69)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 239.476913452s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 69 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=12.915237427s) [1] r=-1 lpr=69 pi=[59,69)/1 crt=42'42 mlcod 42'42 active pruub 239.476928711s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 69 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=12.915117264s) [1] r=-1 lpr=69 pi=[59,69)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 239.476928711s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 69 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69) [0] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69) [0] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 69 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69) [0] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 69 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69) [0] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:15 compute-1 ceph-mon[79643]: 3.e scrub starts
Nov 25 09:36:15 compute-1 ceph-mon[79643]: 3.e scrub ok
Nov 25 09:36:15 compute-1 ceph-mon[79643]: 10.9 scrub starts
Nov 25 09:36:15 compute-1 ceph-mon[79643]: 10.9 scrub ok
Nov 25 09:36:15 compute-1 ceph-mon[79643]: osdmap e68: 3 total, 3 up, 3 in
Nov 25 09:36:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 09:36:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 09:36:15 compute-1 ceph-mon[79643]: 11.d deep-scrub starts
Nov 25 09:36:15 compute-1 ceph-mon[79643]: 11.d deep-scrub ok
Nov 25 09:36:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:15 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:15 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:36:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:15.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:16 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.0 deep-scrub starts
Nov 25 09:36:16 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.0 deep-scrub ok
Nov 25 09:36:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 25 09:36:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0045b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:16 compute-1 ceph-mon[79643]: 5.8 scrub starts
Nov 25 09:36:16 compute-1 ceph-mon[79643]: 5.8 scrub ok
Nov 25 09:36:16 compute-1 ceph-mon[79643]: 12.f scrub starts
Nov 25 09:36:16 compute-1 ceph-mon[79643]: 12.f scrub ok
Nov 25 09:36:16 compute-1 ceph-mon[79643]: pgmap v75: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 321 B/s, 1 keys/s, 7 objects/s recovering
Nov 25 09:36:16 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 09:36:16 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 09:36:16 compute-1 ceph-mon[79643]: osdmap e69: 3 total, 3 up, 3 in
Nov 25 09:36:16 compute-1 ceph-mon[79643]: 5.b scrub starts
Nov 25 09:36:16 compute-1 ceph-mon[79643]: 5.b scrub ok
Nov 25 09:36:16 compute-1 ceph-mon[79643]: 11.6 scrub starts
Nov 25 09:36:16 compute-1 ceph-mon[79643]: 11.6 scrub ok
Nov 25 09:36:16 compute-1 ceph-mon[79643]: osdmap e70: 3 total, 3 up, 3 in
Nov 25 09:36:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:17 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 25 09:36:17 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 25 09:36:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:17.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 25 09:36:17 compute-1 ceph-mon[79643]: 12.0 deep-scrub starts
Nov 25 09:36:17 compute-1 ceph-mon[79643]: 12.0 deep-scrub ok
Nov 25 09:36:17 compute-1 ceph-mon[79643]: 5.0 scrub starts
Nov 25 09:36:17 compute-1 ceph-mon[79643]: 5.0 scrub ok
Nov 25 09:36:17 compute-1 ceph-mon[79643]: 4.0 scrub starts
Nov 25 09:36:17 compute-1 ceph-mon[79643]: 4.0 scrub ok
Nov 25 09:36:17 compute-1 ceph-mon[79643]: osdmap e71: 3 total, 3 up, 3 in
Nov 25 09:36:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:17 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0007e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 25 09:36:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 72 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 72 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 72 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 72 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 72 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 72 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 72 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 72 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:17.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:18 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 25 09:36:18 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 25 09:36:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:18 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:18 compute-1 ceph-mon[79643]: 10.6 scrub starts
Nov 25 09:36:18 compute-1 ceph-mon[79643]: 10.6 scrub ok
Nov 25 09:36:18 compute-1 ceph-mon[79643]: pgmap v78: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 188 B/s, 7 objects/s recovering
Nov 25 09:36:18 compute-1 ceph-mon[79643]: mgrmap e30: compute-0.zcfgby(active, since 92s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:36:18 compute-1 ceph-mon[79643]: osdmap e72: 3 total, 3 up, 3 in
Nov 25 09:36:18 compute-1 ceph-mon[79643]: 3.0 scrub starts
Nov 25 09:36:18 compute-1 ceph-mon[79643]: 3.0 scrub ok
Nov 25 09:36:18 compute-1 ceph-mon[79643]: 8.7 deep-scrub starts
Nov 25 09:36:18 compute-1 ceph-mon[79643]: 8.7 deep-scrub ok
Nov 25 09:36:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 25 09:36:18 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 73 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:18 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 73 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:18 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 73 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:18 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 73 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72) [0] r=0 lpr=72 pi=[51,72)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:18 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:36:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:18 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:36:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:18 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:18 compute-1 sudo[91695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:18 compute-1 sudo[91695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:18 compute-1 sudo[91695]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:19 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.0 deep-scrub starts
Nov 25 09:36:19 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.0 deep-scrub ok
Nov 25 09:36:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:19.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:19 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:19 compute-1 ceph-mon[79643]: 7.d scrub starts
Nov 25 09:36:19 compute-1 ceph-mon[79643]: 7.d scrub ok
Nov 25 09:36:19 compute-1 ceph-mon[79643]: osdmap e73: 3 total, 3 up, 3 in
Nov 25 09:36:19 compute-1 ceph-mon[79643]: 5.4 scrub starts
Nov 25 09:36:19 compute-1 ceph-mon[79643]: 5.4 scrub ok
Nov 25 09:36:19 compute-1 ceph-mon[79643]: 4.b scrub starts
Nov 25 09:36:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:19.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:20 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Nov 25 09:36:20 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Nov 25 09:36:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:20 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0007e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:20 compute-1 ceph-mon[79643]: pgmap v82: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 203 B/s, 7 objects/s recovering
Nov 25 09:36:20 compute-1 ceph-mon[79643]: 10.0 deep-scrub starts
Nov 25 09:36:20 compute-1 ceph-mon[79643]: 10.0 deep-scrub ok
Nov 25 09:36:20 compute-1 ceph-mon[79643]: 4.b scrub ok
Nov 25 09:36:20 compute-1 ceph-mon[79643]: 5.e scrub starts
Nov 25 09:36:20 compute-1 ceph-mon[79643]: 5.e scrub ok
Nov 25 09:36:20 compute-1 ceph-mon[79643]: 11.1f scrub starts
Nov 25 09:36:20 compute-1 ceph-mon[79643]: 11.1f scrub ok
Nov 25 09:36:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:20 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:21 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 25 09:36:21 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 25 09:36:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:21.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:21 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:21 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 25 09:36:21 compute-1 ceph-mon[79643]: 7.0 scrub starts
Nov 25 09:36:21 compute-1 ceph-mon[79643]: 7.0 scrub ok
Nov 25 09:36:21 compute-1 ceph-mon[79643]: 3.9 scrub starts
Nov 25 09:36:21 compute-1 ceph-mon[79643]: 3.9 scrub ok
Nov 25 09:36:21 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 09:36:21 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 09:36:21 compute-1 ceph-mon[79643]: 4.10 scrub starts
Nov 25 09:36:21 compute-1 ceph-mon[79643]: 4.10 scrub ok
Nov 25 09:36:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:21 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:36:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:21.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:22 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 25 09:36:22 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 25 09:36:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:22 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004350 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:22 compute-1 ceph-mon[79643]: pgmap v83: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.3 KiB/s wr, 81 op/s; 46 B/s, 4 objects/s recovering
Nov 25 09:36:22 compute-1 ceph-mon[79643]: 10.d scrub starts
Nov 25 09:36:22 compute-1 ceph-mon[79643]: 10.d scrub ok
Nov 25 09:36:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 09:36:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 09:36:22 compute-1 ceph-mon[79643]: osdmap e74: 3 total, 3 up, 3 in
Nov 25 09:36:22 compute-1 ceph-mon[79643]: 5.d scrub starts
Nov 25 09:36:22 compute-1 ceph-mon[79643]: 5.d scrub ok
Nov 25 09:36:22 compute-1 ceph-mon[79643]: 11.2 scrub starts
Nov 25 09:36:22 compute-1 ceph-mon[79643]: 11.2 scrub ok
Nov 25 09:36:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:22 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de00088e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:23 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.d scrub starts
Nov 25 09:36:23 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.d scrub ok
Nov 25 09:36:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:23.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:23 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:23 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 25 09:36:23 compute-1 ceph-mon[79643]: 10.b scrub starts
Nov 25 09:36:23 compute-1 ceph-mon[79643]: 10.b scrub ok
Nov 25 09:36:23 compute-1 ceph-mon[79643]: 5.1a scrub starts
Nov 25 09:36:23 compute-1 ceph-mon[79643]: 5.1a scrub ok
Nov 25 09:36:23 compute-1 ceph-mon[79643]: pgmap v85: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.1 KiB/s wr, 67 op/s; 38 B/s, 3 objects/s recovering
Nov 25 09:36:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 09:36:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 09:36:23 compute-1 ceph-mon[79643]: 4.11 scrub starts
Nov 25 09:36:23 compute-1 ceph-mon[79643]: 4.11 scrub ok
Nov 25 09:36:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:23.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:23 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 75 pg[6.8( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=75) [0] r=0 lpr=75 pi=[49,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:24 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 09:36:24 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 09:36:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:24 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:24 compute-1 ceph-mon[79643]: 12.d scrub starts
Nov 25 09:36:24 compute-1 ceph-mon[79643]: 12.d scrub ok
Nov 25 09:36:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 09:36:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 09:36:24 compute-1 ceph-mon[79643]: osdmap e75: 3 total, 3 up, 3 in
Nov 25 09:36:24 compute-1 ceph-mon[79643]: 11.16 scrub starts
Nov 25 09:36:24 compute-1 ceph-mon[79643]: 11.16 scrub ok
Nov 25 09:36:24 compute-1 ceph-mon[79643]: 8.1d scrub starts
Nov 25 09:36:24 compute-1 ceph-mon[79643]: 8.1d scrub ok
Nov 25 09:36:24 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 25 09:36:24 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 76 pg[6.8( v 42'42 (0'0,42'42] local-lis/les=75/76 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=75) [0] r=0 lpr=75 pi=[49,75)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:24 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db40044f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:25 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Nov 25 09:36:25 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Nov 25 09:36:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:25.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:25 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de00088e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:25 compute-1 ceph-mon[79643]: 7.c scrub starts
Nov 25 09:36:25 compute-1 ceph-mon[79643]: 7.c scrub ok
Nov 25 09:36:25 compute-1 ceph-mon[79643]: osdmap e76: 3 total, 3 up, 3 in
Nov 25 09:36:25 compute-1 ceph-mon[79643]: 11.17 scrub starts
Nov 25 09:36:25 compute-1 ceph-mon[79643]: 11.17 scrub ok
Nov 25 09:36:25 compute-1 ceph-mon[79643]: pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1023 B/s wr, 63 op/s; 36 B/s, 3 objects/s recovering
Nov 25 09:36:25 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 09:36:25 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 09:36:25 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 25 09:36:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:36:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:25.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:36:26 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 25 09:36:26 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 25 09:36:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:26 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:26 compute-1 ceph-mon[79643]: 12.5 scrub starts
Nov 25 09:36:26 compute-1 ceph-mon[79643]: 12.5 scrub ok
Nov 25 09:36:26 compute-1 ceph-mon[79643]: 4.12 scrub starts
Nov 25 09:36:26 compute-1 ceph-mon[79643]: 4.12 scrub ok
Nov 25 09:36:26 compute-1 ceph-mon[79643]: 12.11 deep-scrub starts
Nov 25 09:36:26 compute-1 ceph-mon[79643]: 12.11 deep-scrub ok
Nov 25 09:36:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 09:36:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 09:36:26 compute-1 ceph-mon[79643]: osdmap e77: 3 total, 3 up, 3 in
Nov 25 09:36:26 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 25 09:36:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:26 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:27 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Nov 25 09:36:27 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Nov 25 09:36:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:27.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:27 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004510 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 25 09:36:27 compute-1 ceph-mon[79643]: 10.7 deep-scrub starts
Nov 25 09:36:27 compute-1 ceph-mon[79643]: 10.7 deep-scrub ok
Nov 25 09:36:27 compute-1 ceph-mon[79643]: 8.1e deep-scrub starts
Nov 25 09:36:27 compute-1 ceph-mon[79643]: 8.1e deep-scrub ok
Nov 25 09:36:27 compute-1 ceph-mon[79643]: 8.16 scrub starts
Nov 25 09:36:27 compute-1 ceph-mon[79643]: 8.16 scrub ok
Nov 25 09:36:27 compute-1 ceph-mon[79643]: osdmap e78: 3 total, 3 up, 3 in
Nov 25 09:36:27 compute-1 ceph-mon[79643]: pgmap v91: 337 pgs: 2 unknown, 2 active+remapped, 1 peering, 332 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 25 09:36:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093627 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:36:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:36:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:27.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:36:28 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 25 09:36:28 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 25 09:36:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:28 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de00090e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:28 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:28 compute-1 ceph-mon[79643]: 12.1 scrub starts
Nov 25 09:36:28 compute-1 ceph-mon[79643]: 12.1 scrub ok
Nov 25 09:36:28 compute-1 ceph-mon[79643]: 4.16 deep-scrub starts
Nov 25 09:36:28 compute-1 ceph-mon[79643]: 4.16 deep-scrub ok
Nov 25 09:36:28 compute-1 ceph-mon[79643]: 8.15 scrub starts
Nov 25 09:36:28 compute-1 ceph-mon[79643]: 8.15 scrub ok
Nov 25 09:36:28 compute-1 ceph-mon[79643]: osdmap e79: 3 total, 3 up, 3 in
Nov 25 09:36:28 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 25 09:36:29 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.1f deep-scrub starts
Nov 25 09:36:29 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.1f deep-scrub ok
Nov 25 09:36:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:29 compute-1 ceph-mon[79643]: 7.15 deep-scrub starts
Nov 25 09:36:29 compute-1 ceph-mon[79643]: 7.15 deep-scrub ok
Nov 25 09:36:29 compute-1 ceph-mon[79643]: 8.1a scrub starts
Nov 25 09:36:29 compute-1 ceph-mon[79643]: 8.1a scrub ok
Nov 25 09:36:29 compute-1 ceph-mon[79643]: 2.a scrub starts
Nov 25 09:36:29 compute-1 ceph-mon[79643]: 2.a scrub ok
Nov 25 09:36:29 compute-1 ceph-mon[79643]: osdmap e80: 3 total, 3 up, 3 in
Nov 25 09:36:29 compute-1 ceph-mon[79643]: pgmap v94: 337 pgs: 2 unknown, 2 active+remapped, 1 peering, 332 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 25 09:36:29 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 25 09:36:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:29.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:30 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 25 09:36:30 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 25 09:36:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:30 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:30 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc0092f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:30 compute-1 ceph-mon[79643]: 12.1f deep-scrub starts
Nov 25 09:36:30 compute-1 ceph-mon[79643]: 12.1f deep-scrub ok
Nov 25 09:36:30 compute-1 ceph-mon[79643]: 11.18 scrub starts
Nov 25 09:36:30 compute-1 ceph-mon[79643]: 11.18 scrub ok
Nov 25 09:36:30 compute-1 ceph-mon[79643]: 11.3 scrub starts
Nov 25 09:36:30 compute-1 ceph-mon[79643]: 11.3 scrub ok
Nov 25 09:36:30 compute-1 ceph-mon[79643]: osdmap e81: 3 total, 3 up, 3 in
Nov 25 09:36:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:31 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 25 09:36:31 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 25 09:36:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093631 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:36:31 compute-1 sudo[91756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:31 compute-1 sudo[91756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:31 compute-1 sudo[91756]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:31 compute-1 sudo[91781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:36:31 compute-1 sudo[91781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:31 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:31 compute-1 podman[91860]: 2025-11-25 09:36:31.644437322 +0000 UTC m=+0.039144523 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:36:31 compute-1 podman[91860]: 2025-11-25 09:36:31.723591408 +0000 UTC m=+0.118298611 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:36:31 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 25 09:36:31 compute-1 ceph-mon[79643]: 2.11 scrub starts
Nov 25 09:36:31 compute-1 ceph-mon[79643]: 2.11 scrub ok
Nov 25 09:36:31 compute-1 ceph-mon[79643]: 4.17 scrub starts
Nov 25 09:36:31 compute-1 ceph-mon[79643]: 4.17 scrub ok
Nov 25 09:36:31 compute-1 ceph-mon[79643]: 12.4 scrub starts
Nov 25 09:36:31 compute-1 ceph-mon[79643]: 12.4 scrub ok
Nov 25 09:36:31 compute-1 ceph-mon[79643]: pgmap v96: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 37 op/s; 75 B/s, 2 objects/s recovering
Nov 25 09:36:31 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 09:36:31 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 09:36:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:31.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:32 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 25 09:36:32 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 25 09:36:32 compute-1 podman[91971]: 2025-11-25 09:36:32.076529568 +0000 UTC m=+0.035341717 container exec 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:36:32 compute-1 podman[91971]: 2025-11-25 09:36:32.080822569 +0000 UTC m=+0.039634708 container exec_died 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:36:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:32 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 82 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82) [0] r=0 lpr=82 pi=[51,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:32 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 82 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82) [0] r=0 lpr=82 pi=[51,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:32 compute-1 podman[92031]: 2025-11-25 09:36:32.240295077 +0000 UTC m=+0.033940736 container exec 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:36:32 compute-1 podman[92031]: 2025-11-25 09:36:32.250612575 +0000 UTC m=+0.044258213 container exec_died 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:36:32 compute-1 podman[92082]: 2025-11-25 09:36:32.391510862 +0000 UTC m=+0.035342027 container exec 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:36:32 compute-1 podman[92082]: 2025-11-25 09:36:32.400617818 +0000 UTC m=+0.044448963 container exec_died 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:36:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:32 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:32 compute-1 podman[92139]: 2025-11-25 09:36:32.530720597 +0000 UTC m=+0.032628563 container exec 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, release=1793)
Nov 25 09:36:32 compute-1 podman[92139]: 2025-11-25 09:36:32.543648605 +0000 UTC m=+0.045556561 container exec_died 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., release=1793, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, distribution-scope=public, version=2.2.4)
Nov 25 09:36:32 compute-1 sudo[91781]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:32 compute-1 sudo[92165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:32 compute-1 sudo[92165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:32 compute-1 sudo[92165]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:32 compute-1 sudo[92190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:36:32 compute-1 sudo[92190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 25 09:36:32 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 83 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=-1 lpr=83 pi=[51,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:32 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 83 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=-1 lpr=83 pi=[51,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:32 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 83 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=-1 lpr=83 pi=[51,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:32 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 83 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=-1 lpr=83 pi=[51,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:32 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de00090e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:32 compute-1 ceph-mon[79643]: 7.17 scrub starts
Nov 25 09:36:32 compute-1 ceph-mon[79643]: 7.17 scrub ok
Nov 25 09:36:32 compute-1 ceph-mon[79643]: 5.1e scrub starts
Nov 25 09:36:32 compute-1 ceph-mon[79643]: 5.1e scrub ok
Nov 25 09:36:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 09:36:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 09:36:32 compute-1 ceph-mon[79643]: osdmap e82: 3 total, 3 up, 3 in
Nov 25 09:36:32 compute-1 ceph-mon[79643]: 2.b scrub starts
Nov 25 09:36:32 compute-1 ceph-mon[79643]: 2.b scrub ok
Nov 25 09:36:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:32 compute-1 ceph-mon[79643]: osdmap e83: 3 total, 3 up, 3 in
Nov 25 09:36:33 compute-1 sudo[92190]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:33 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 25 09:36:33 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 25 09:36:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:36:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:33.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:36:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:33 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dd4003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:33 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 25 09:36:33 compute-1 ceph-mon[79643]: 10.1a scrub starts
Nov 25 09:36:33 compute-1 ceph-mon[79643]: 10.1a scrub ok
Nov 25 09:36:33 compute-1 ceph-mon[79643]: 3.18 scrub starts
Nov 25 09:36:33 compute-1 ceph-mon[79643]: 3.18 scrub ok
Nov 25 09:36:33 compute-1 ceph-mon[79643]: 12.13 scrub starts
Nov 25 09:36:33 compute-1 ceph-mon[79643]: 12.13 scrub ok
Nov 25 09:36:33 compute-1 ceph-mon[79643]: pgmap v99: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 38 op/s; 76 B/s, 2 objects/s recovering
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 09:36:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 09:36:33 compute-1 ceph-mon[79643]: osdmap e84: 3 total, 3 up, 3 in
Nov 25 09:36:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:36:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:33.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:36:34 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 25 09:36:34 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 84 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84 pruub=12.342535019s) [1] r=-1 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 42'42 active pruub 257.496704102s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:34 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 84 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84 pruub=12.342506409s) [1] r=-1 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 257.496704102s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:34 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 25 09:36:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:34 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:34 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:34 compute-1 ceph-mon[79643]: 2.14 scrub starts
Nov 25 09:36:34 compute-1 ceph-mon[79643]: 2.14 scrub ok
Nov 25 09:36:34 compute-1 ceph-mon[79643]: 3.19 scrub starts
Nov 25 09:36:34 compute-1 ceph-mon[79643]: 3.19 scrub ok
Nov 25 09:36:34 compute-1 ceph-mon[79643]: 8.2 scrub starts
Nov 25 09:36:34 compute-1 ceph-mon[79643]: 8.2 scrub ok
Nov 25 09:36:34 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 25 09:36:34 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:34 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:34 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:34 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:35 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 25 09:36:35 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 25 09:36:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:35.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:35 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de00090e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:35 compute-1 ceph-mon[79643]: 10.1c scrub starts
Nov 25 09:36:35 compute-1 ceph-mon[79643]: 10.1c scrub ok
Nov 25 09:36:35 compute-1 ceph-mon[79643]: 10.13 deep-scrub starts
Nov 25 09:36:35 compute-1 ceph-mon[79643]: 10.13 deep-scrub ok
Nov 25 09:36:35 compute-1 ceph-mon[79643]: 8.3 scrub starts
Nov 25 09:36:35 compute-1 ceph-mon[79643]: 8.3 scrub ok
Nov 25 09:36:35 compute-1 ceph-mon[79643]: osdmap e85: 3 total, 3 up, 3 in
Nov 25 09:36:35 compute-1 ceph-mon[79643]: pgmap v102: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 09:36:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 09:36:35 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 25 09:36:35 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:35 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:35.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:36 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Nov 25 09:36:36 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Nov 25 09:36:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:36 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:36 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:36 compute-1 sudo[92246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:36:36 compute-1 sudo[92246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:36 compute-1 sudo[92246]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:36 compute-1 ceph-mon[79643]: 10.1d scrub starts
Nov 25 09:36:36 compute-1 ceph-mon[79643]: 10.1d scrub ok
Nov 25 09:36:36 compute-1 ceph-mon[79643]: 7.1e scrub starts
Nov 25 09:36:36 compute-1 ceph-mon[79643]: 7.1e scrub ok
Nov 25 09:36:36 compute-1 ceph-mon[79643]: 2.1c deep-scrub starts
Nov 25 09:36:36 compute-1 ceph-mon[79643]: 2.1c deep-scrub ok
Nov 25 09:36:36 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 09:36:36 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 09:36:36 compute-1 ceph-mon[79643]: osdmap e86: 3 total, 3 up, 3 in
Nov 25 09:36:36 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:36 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:37 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 25 09:36:37 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 25 09:36:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:37.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:37 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:37.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 25 09:36:37 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:37 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:37 compute-1 ceph-mon[79643]: 12.1b scrub starts
Nov 25 09:36:37 compute-1 ceph-mon[79643]: 12.1b scrub ok
Nov 25 09:36:37 compute-1 ceph-mon[79643]: 5.1d scrub starts
Nov 25 09:36:37 compute-1 ceph-mon[79643]: 5.1d scrub ok
Nov 25 09:36:37 compute-1 ceph-mon[79643]: 10.1 scrub starts
Nov 25 09:36:37 compute-1 ceph-mon[79643]: 10.1 scrub ok
Nov 25 09:36:37 compute-1 ceph-mon[79643]: pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 3 objects/s recovering
Nov 25 09:36:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 09:36:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 09:36:38 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 25 09:36:38 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 25 09:36:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:38 compute-1 ceph-mon[79643]: 2.16 scrub starts
Nov 25 09:36:38 compute-1 ceph-mon[79643]: 2.16 scrub ok
Nov 25 09:36:38 compute-1 ceph-mon[79643]: 7.18 scrub starts
Nov 25 09:36:38 compute-1 ceph-mon[79643]: 7.18 scrub ok
Nov 25 09:36:38 compute-1 ceph-mon[79643]: 8.9 scrub starts
Nov 25 09:36:38 compute-1 ceph-mon[79643]: 8.9 scrub ok
Nov 25 09:36:38 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 09:36:38 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 09:36:38 compute-1 ceph-mon[79643]: osdmap e87: 3 total, 3 up, 3 in
Nov 25 09:36:38 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 25 09:36:38 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:38 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:38 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:38 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:38 compute-1 sudo[92272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:38 compute-1 sudo[92272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:38 compute-1 sudo[92272]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:39 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 25 09:36:39 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 25 09:36:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:39.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:39 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:39.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:39 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 25 09:36:39 compute-1 ceph-mon[79643]: 7.12 scrub starts
Nov 25 09:36:39 compute-1 ceph-mon[79643]: 7.12 scrub ok
Nov 25 09:36:39 compute-1 ceph-mon[79643]: 12.12 scrub starts
Nov 25 09:36:39 compute-1 ceph-mon[79643]: 12.12 scrub ok
Nov 25 09:36:39 compute-1 ceph-mon[79643]: 11.a scrub starts
Nov 25 09:36:39 compute-1 ceph-mon[79643]: 11.a scrub ok
Nov 25 09:36:39 compute-1 ceph-mon[79643]: osdmap e88: 3 total, 3 up, 3 in
Nov 25 09:36:39 compute-1 ceph-mon[79643]: pgmap v107: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 3 objects/s recovering
Nov 25 09:36:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 09:36:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 09:36:39 compute-1 ceph-mon[79643]: 2.e scrub starts
Nov 25 09:36:39 compute-1 ceph-mon[79643]: 2.e scrub ok
Nov 25 09:36:40 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 25 09:36:40 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 25 09:36:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:40 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:36:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:40 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:40 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dcc0052c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:40 compute-1 ceph-mon[79643]: 2.17 deep-scrub starts
Nov 25 09:36:40 compute-1 ceph-mon[79643]: 2.17 deep-scrub ok
Nov 25 09:36:40 compute-1 ceph-mon[79643]: 4.2 scrub starts
Nov 25 09:36:40 compute-1 ceph-mon[79643]: 4.2 scrub ok
Nov 25 09:36:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 09:36:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 09:36:40 compute-1 ceph-mon[79643]: osdmap e89: 3 total, 3 up, 3 in
Nov 25 09:36:40 compute-1 ceph-mon[79643]: 7.b deep-scrub starts
Nov 25 09:36:40 compute-1 ceph-mon[79643]: 7.b deep-scrub ok
Nov 25 09:36:40 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 25 09:36:40 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:40 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:40 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:40 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:41 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Nov 25 09:36:41 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Nov 25 09:36:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:41.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:41 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:41 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009280 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:41.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:41 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 25 09:36:41 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:41 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:41 compute-1 ceph-mon[79643]: 10.1f scrub starts
Nov 25 09:36:41 compute-1 ceph-mon[79643]: 10.1f scrub ok
Nov 25 09:36:41 compute-1 ceph-mon[79643]: 7.5 deep-scrub starts
Nov 25 09:36:41 compute-1 ceph-mon[79643]: 7.5 deep-scrub ok
Nov 25 09:36:41 compute-1 ceph-mon[79643]: osdmap e90: 3 total, 3 up, 3 in
Nov 25 09:36:41 compute-1 ceph-mon[79643]: pgmap v110: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s; 82 B/s, 3 objects/s recovering
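The pgmap line is the mon's periodic cluster digest: a PG state histogram, logical data vs. raw usage, and rolling client/recovery rates. Here 2 of 337 PGs are still peering while the rest are active+clean. A sketch that turns the histogram into a dict (Python; the format is taken from the line above):

    import re

    def pg_states(pgmap_line):
        """Parse '337 pgs: 2 peering, 335 active+clean; ...' into a dict."""
        m = re.search(r'(\d+) pgs: ([^;]+);', pgmap_line)
        total = int(m.group(1))
        states = {}
        for part in m.group(2).split(','):
            count, state = part.strip().split(' ', 1)
            states[state] = int(count)
        assert sum(states.values()) == total  # histogram covers every PG
        return states

    line = ("pgmap v110: 337 pgs: 2 peering, 335 active+clean; "
            "458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail")
    print(pg_states(line))  # {'peering': 2, 'active+clean': 335}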
Nov 25 09:36:41 compute-1 ceph-mon[79643]: 5.6 scrub starts
Nov 25 09:36:41 compute-1 ceph-mon[79643]: 5.6 scrub ok
Nov 25 09:36:41 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:42 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 25 09:36:42 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 25 09:36:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:42 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:42 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:42 compute-1 ceph-mon[79643]: 12.16 scrub starts
Nov 25 09:36:42 compute-1 ceph-mon[79643]: 12.16 scrub ok
Nov 25 09:36:42 compute-1 ceph-mon[79643]: 8.a scrub starts
Nov 25 09:36:42 compute-1 ceph-mon[79643]: 8.a scrub ok
Nov 25 09:36:42 compute-1 ceph-mon[79643]: osdmap e91: 3 total, 3 up, 3 in
Nov 25 09:36:42 compute-1 ceph-mon[79643]: 7.4 scrub starts
Nov 25 09:36:42 compute-1 ceph-mon[79643]: 7.4 scrub ok
Nov 25 09:36:43 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 25 09:36:43 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 25 09:36:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:43 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:36:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:43 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:36:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:43 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
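The reaper lines above are NFS-Ganesha's grace-period machinery: after a (re)start the server reloads client reclaim state from its recovery backend, and since no clients hold reclaimable state (clid count(0)) it can lift grace early, which it does a few seconds later ("NFS Server Now NOT IN GRACE" below). A hypothetical helper to measure the grace window from a saved log excerpt (Python; keys off the message strings shown here, which is an assumption about their stability):

    import re
    from datetime import datetime

    TS = re.compile(r'(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2})')

    def grace_window(lines):
        """Return (start, lift) datetimes from a ganesha log excerpt."""
        start = lift = None
        for line in lines:
            ts = TS.search(line)
            if not ts:
                continue
            when = datetime.strptime(ts.group(1), '%d/%m/%Y %H:%M:%S')
            if 'nfs_start_grace' in line and start is None:
                start = when
            elif 'NOT IN GRACE' in line:
                lift = when
        return start, lift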
Nov 25 09:36:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:43.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:43 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc03b630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:43.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:43 compute-1 ceph-mon[79643]: 5.18 deep-scrub starts
Nov 25 09:36:43 compute-1 ceph-mon[79643]: 5.18 deep-scrub ok
Nov 25 09:36:43 compute-1 ceph-mon[79643]: 11.8 deep-scrub starts
Nov 25 09:36:43 compute-1 ceph-mon[79643]: 11.8 deep-scrub ok
Nov 25 09:36:43 compute-1 ceph-mon[79643]: pgmap v112: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 962 B/s rd, 962 B/s wr, 1 op/s; 77 B/s, 3 objects/s recovering
Nov 25 09:36:43 compute-1 ceph-mon[79643]: 3.1 scrub starts
Nov 25 09:36:43 compute-1 ceph-mon[79643]: 3.1 scrub ok
Nov 25 09:36:44 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 25 09:36:44 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 25 09:36:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:44 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de4002660 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:44 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de4002660 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:44 compute-1 ceph-mon[79643]: 4.18 scrub starts
Nov 25 09:36:44 compute-1 ceph-mon[79643]: 4.18 scrub ok
Nov 25 09:36:44 compute-1 ceph-mon[79643]: 8.d scrub starts
Nov 25 09:36:44 compute-1 ceph-mon[79643]: 8.d scrub ok
Nov 25 09:36:44 compute-1 ceph-mon[79643]: 8.17 scrub starts
Nov 25 09:36:44 compute-1 ceph-mon[79643]: 8.17 scrub ok
Nov 25 09:36:44 compute-1 ceph-mon[79643]: 12.e deep-scrub starts
Nov 25 09:36:44 compute-1 ceph-mon[79643]: 12.e deep-scrub ok
Nov 25 09:36:45 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 25 09:36:45 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 25 09:36:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:45.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:45 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:45 compute-1 ceph-mon[79643]: 12.9 deep-scrub starts
Nov 25 09:36:45 compute-1 ceph-mon[79643]: 12.9 deep-scrub ok
Nov 25 09:36:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:45 compute-1 ceph-mon[79643]: pgmap v113: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s; 54 B/s, 2 objects/s recovering
Nov 25 09:36:45 compute-1 ceph-mon[79643]: 8.12 scrub starts
Nov 25 09:36:45 compute-1 ceph-mon[79643]: 8.12 scrub ok
Nov 25 09:36:45 compute-1 ceph-mon[79643]: 10.8 deep-scrub starts
Nov 25 09:36:45 compute-1 ceph-mon[79643]: 10.8 deep-scrub ok
Nov 25 09:36:46 compute-1 sudo[91533]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:46 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:36:46 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 09:36:46 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 09:36:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:46 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc03bf50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:46 compute-1 sudo[92452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijaepcqylrhhyptujodvgqmckrvuqbhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063406.4546964-370-256147988901742/AnsiballZ_command.py'
Nov 25 09:36:46 compute-1 sudo[92452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:46 compute-1 python3.9[92454]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
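The sudo/python3.9 pairs are Zuul-driven Ansible at work: each task ships an AnsiballZ_<module>.py payload into ~/.ansible/tmp, executes it as root via sudo, and the module logs its fully resolved arguments to the journal (here ansible.legacy.command running `rpm -V` over the packages the node is expected to carry). A sketch correlating a sudo COMMAND line with its module name and temp dir (Python; the path regex is an assumption based on the lines above):

    import re

    ANSIBALLZ = re.compile(
        r"COMMAND=.*?(?P<tmp>/home/\w+/\.ansible/tmp/"
        r"ansible-tmp-[\d.]+-\d+-\d+)/AnsiballZ_(?P<module>\w+)\.py"
    )

    line = ("zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; "
            "COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-x ; /usr/bin/python3.9 "
            "/home/zuul/.ansible/tmp/ansible-tmp-1764063406.4546964-370-"
            "256147988901742/AnsiballZ_command.py'")
    m = ANSIBALLZ.search(line)
    print(m.group("module"))  # command
    print(m.group("tmp"))     # /home/zuul/.ansible/tmp/ansible-tmp-...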
Nov 25 09:36:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:46 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:46 compute-1 ceph-mon[79643]: 10.f scrub starts
Nov 25 09:36:46 compute-1 ceph-mon[79643]: 10.f scrub ok
Nov 25 09:36:46 compute-1 ceph-mon[79643]: 5.1b scrub starts
Nov 25 09:36:46 compute-1 ceph-mon[79643]: 5.1b scrub ok
Nov 25 09:36:46 compute-1 ceph-mon[79643]: 7.6 scrub starts
Nov 25 09:36:46 compute-1 ceph-mon[79643]: 7.6 scrub ok
Nov 25 09:36:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:47 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 25 09:36:47 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 25 09:36:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:47.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:47 compute-1 sudo[92452]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:47 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de4002660 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 25 09:36:47 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:47 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:47 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425540924s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 42'42 active pruub 273.496917725s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:47 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425458908s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 273.496917725s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:47 compute-1 ceph-mon[79643]: 11.e scrub starts
Nov 25 09:36:47 compute-1 ceph-mon[79643]: 11.e scrub ok
Nov 25 09:36:47 compute-1 ceph-mon[79643]: pgmap v114: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 1 objects/s recovering
Nov 25 09:36:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 09:36:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 09:36:47 compute-1 ceph-mon[79643]: 11.12 scrub starts
Nov 25 09:36:47 compute-1 ceph-mon[79643]: 11.12 scrub ok
Nov 25 09:36:47 compute-1 ceph-mon[79643]: 5.5 scrub starts
Nov 25 09:36:47 compute-1 ceph-mon[79643]: 5.5 scrub ok
Nov 25 09:36:48 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Nov 25 09:36:48 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Nov 25 09:36:48 compute-1 sudo[92740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giuhrhaxllhoewreyjsvyicvyfyxjear ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063407.754139-394-32171776030318/AnsiballZ_selinux.py'
Nov 25 09:36:48 compute-1 sudo[92740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:48 compute-1 python3.9[92742]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 09:36:48 compute-1 sudo[92740]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:48 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:48 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc03bf50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:48 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 25 09:36:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:48 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:48 compute-1 ceph-mon[79643]: 2.d scrub starts
Nov 25 09:36:48 compute-1 ceph-mon[79643]: 2.d scrub ok
Nov 25 09:36:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 09:36:48 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
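The mgr dispatch/finished pairs show the pg autoscaling machinery stepping pgp_num_actual up one placement at a time for pools whose pg_num was raised (cephfs.cephfs.meta and default.rgw.log toward 16 here, then 17, 18, 19 below); each step produces a new osdmap epoch and the small peering/remap bursts seen around it. The cmd=[{...}] payload is the literal mon command JSON; a rough equivalent via the rados Python bindings would be (sketch only; assumes a readable /etc/ceph/ceph.conf and an admin keyring):

    import json
    import rados

    # Issue the same mon command the mgr dispatches in the lines above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = {"prefix": "osd pool set",
           "pool": "default.rgw.log",
           "var": "pgp_num_actual",
           "val": "16"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    print(ret, outs)  # ret 0 on success
    cluster.shutdown()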
Nov 25 09:36:48 compute-1 ceph-mon[79643]: osdmap e92: 3 total, 3 up, 3 in
Nov 25 09:36:48 compute-1 ceph-mon[79643]: 12.c deep-scrub starts
Nov 25 09:36:48 compute-1 ceph-mon[79643]: 5.1c deep-scrub starts
Nov 25 09:36:48 compute-1 ceph-mon[79643]: 12.c deep-scrub ok
Nov 25 09:36:48 compute-1 ceph-mon[79643]: 5.1c deep-scrub ok
Nov 25 09:36:48 compute-1 ceph-mon[79643]: osdmap e93: 3 total, 3 up, 3 in
Nov 25 09:36:49 compute-1 sudo[92892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avuvmjrdaaieaqnyqjwcvrapbdaqvouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063408.841768-427-172310463766361/AnsiballZ_command.py'
Nov 25 09:36:49 compute-1 sudo[92892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:49 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 25 09:36:49 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 25 09:36:49 compute-1 python3.9[92894]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 09:36:49 compute-1 sudo[92892]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:49 compute-1 sudo[93045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagtwlkpbzpmsaocbljiatnwadneoyzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063409.382584-451-102405153917726/AnsiballZ_file.py'
Nov 25 09:36:49 compute-1 sudo[93045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:49 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:49 compute-1 python3.9[93047]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:36:49 compute-1 sudo[93045]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:36:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:49.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:36:49 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 25 09:36:49 compute-1 ceph-mon[79643]: 12.3 scrub starts
Nov 25 09:36:49 compute-1 ceph-mon[79643]: 12.3 scrub ok
Nov 25 09:36:49 compute-1 ceph-mon[79643]: pgmap v117: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 09:36:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 25 09:36:49 compute-1 ceph-mon[79643]: 11.14 scrub starts
Nov 25 09:36:49 compute-1 ceph-mon[79643]: 11.14 scrub ok
Nov 25 09:36:49 compute-1 ceph-mon[79643]: 3.6 scrub starts
Nov 25 09:36:49 compute-1 ceph-mon[79643]: 3.6 scrub ok
Nov 25 09:36:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 25 09:36:49 compute-1 ceph-mon[79643]: osdmap e94: 3 total, 3 up, 3 in
Nov 25 09:36:50 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:50 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 09:36:50 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 09:36:50 compute-1 sudo[93197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuxazonuymukvhmxdwlkvdevhscsqbpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063409.912101-475-276138839878406/AnsiballZ_mount.py'
Nov 25 09:36:50 compute-1 sudo[93197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:50 compute-1 python3.9[93199]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 09:36:50 compute-1 sudo[93197]: pam_unix(sudo:session): session closed for user root
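The three tasks just above provision a swap file: dd writes a 1 GiB /swap, the file module clamps it to root:root 0600, and ansible.posix.mount with fstype=swap/state=present only records the fstab entry; presumably mkswap/swapon happen in a later task not shown in this excerpt. A rough Python equivalent under that assumption:

    import os
    import subprocess

    SWAPFILE = "/swap"

    # dd task: create the backing file only if it does not exist yet
    # (mirrors creates=/swap above).
    if not os.path.exists(SWAPFILE):
        subprocess.run(["dd", "if=/dev/zero", f"of={SWAPFILE}",
                        "count=1024", "bs=1M"], check=True)

    # file task: swap files must not be world-readable.
    os.chmod(SWAPFILE, 0o600)

    # mount task with state=present: fstab bookkeeping only, no swapon.
    entry = f"{SWAPFILE} none swap sw 0 0"
    with open("/etc/fstab") as f:
        present = entry in f.read()
    if not present:
        with open("/etc/fstab", "a") as f:
            f.write(entry + "\n")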
Nov 25 09:36:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:50 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de4004150 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:50 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db40045b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:50 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 25 09:36:50 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:50 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:50 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:50 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:50 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:50 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:50 compute-1 ceph-mon[79643]: 2.c scrub starts
Nov 25 09:36:50 compute-1 ceph-mon[79643]: 2.c scrub ok
Nov 25 09:36:50 compute-1 ceph-mon[79643]: 3.1c scrub starts
Nov 25 09:36:50 compute-1 ceph-mon[79643]: 3.1c scrub ok
Nov 25 09:36:50 compute-1 ceph-mon[79643]: 7.2 scrub starts
Nov 25 09:36:50 compute-1 ceph-mon[79643]: 7.2 scrub ok
Nov 25 09:36:51 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 25 09:36:51 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 25 09:36:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:51.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:51 compute-1 sudo[93350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airzhfgqjdsitcjjybiojnbpfbvmqtqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063411.3912976-559-98325380789950/AnsiballZ_file.py'
Nov 25 09:36:51 compute-1 sudo[93350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:51 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc03bf50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:51 compute-1 python3.9[93352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:36:51 compute-1 sudo[93350]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:51.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:51 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 25 09:36:51 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:51 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:51 compute-1 ceph-mon[79643]: 4.9 scrub starts
Nov 25 09:36:51 compute-1 ceph-mon[79643]: 4.9 scrub ok
Nov 25 09:36:51 compute-1 ceph-mon[79643]: osdmap e95: 3 total, 3 up, 3 in
Nov 25 09:36:51 compute-1 ceph-mon[79643]: pgmap v120: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 3 objects/s recovering
Nov 25 09:36:51 compute-1 ceph-mon[79643]: 8.14 scrub starts
Nov 25 09:36:51 compute-1 ceph-mon[79643]: 8.14 scrub ok
Nov 25 09:36:51 compute-1 ceph-mon[79643]: 7.e scrub starts
Nov 25 09:36:51 compute-1 ceph-mon[79643]: 7.e scrub ok
Nov 25 09:36:51 compute-1 ceph-mon[79643]: osdmap e96: 3 total, 3 up, 3 in
Nov 25 09:36:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:52 compute-1 sudo[93502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wurtinztiedaqwwhwtkmbwenxlqzwiej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063411.931993-583-143221206899481/AnsiballZ_stat.py'
Nov 25 09:36:52 compute-1 sudo[93502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:52 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 25 09:36:52 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 25 09:36:52 compute-1 python3.9[93504]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:36:52 compute-1 sudo[93502]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:52 compute-1 sudo[93580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkbuicnldyqacprtsdahcptlbxocmvjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063411.931993-583-143221206899481/AnsiballZ_file.py'
Nov 25 09:36:52 compute-1 sudo[93580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:52 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:52 compute-1 python3.9[93582]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:36:52 compute-1 sudo[93580]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 25 09:36:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:52 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:52 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:52 compute-1 ceph-mon[79643]: 2.f scrub starts
Nov 25 09:36:52 compute-1 ceph-mon[79643]: 2.f scrub ok
Nov 25 09:36:52 compute-1 ceph-mon[79643]: 8.10 scrub starts
Nov 25 09:36:52 compute-1 ceph-mon[79643]: 8.10 scrub ok
Nov 25 09:36:52 compute-1 ceph-mon[79643]: 10.2 scrub starts
Nov 25 09:36:52 compute-1 ceph-mon[79643]: 10.2 scrub ok
Nov 25 09:36:52 compute-1 ceph-mon[79643]: osdmap e97: 3 total, 3 up, 3 in
Nov 25 09:36:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093653 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
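The haproxy line confirms the ganesha backend passes its Layer4 (plain TCP connect) check. Those same probe connections are one plausible source of the recurring TIRPC svc_vc_recv errors throughout this section: a connection that closes without sending data fails the PROXY-protocol header read, and the bare "%" is a missing format argument in the upstream log statement, left as-is here. A counter for the recv errors by ganesha service thread (Python; reads a saved log excerpt whose path you supply):

    import re
    import sys
    from collections import Counter

    THREAD = re.compile(r'ganesha\.nfsd-\d+\[(?P<thread>\w+)\] rpc :TIRPC')

    def recv_errors(path):
        """Count svc_vc_recv failures per ganesha thread in a log file."""
        counts = Counter()
        with open(path) as log:
            for line in log:
                if 'svc_vc_recv' in line and (m := THREAD.search(line)):
                    counts[m.group('thread')] += 1
        return counts

    if __name__ == '__main__':
        print(recv_errors(sys.argv[1]).most_common())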
Nov 25 09:36:53 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 09:36:53 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 09:36:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:53.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:53 compute-1 sudo[93733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxzgbtmycbwghmfidkxpptgyflnsiwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063413.4190433-646-147294179730992/AnsiballZ_stat.py'
Nov 25 09:36:53 compute-1 sudo[93733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:53 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db40045d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 25 09:36:53 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=97/98 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:53 compute-1 systemd[72489]: Created slice User Background Tasks Slice.
Nov 25 09:36:53 compute-1 systemd[72489]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 09:36:53 compute-1 systemd[72489]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 09:36:53 compute-1 python3.9[93735]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:36:53 compute-1 sudo[93733]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:53.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:53 compute-1 ceph-mon[79643]: 7.a scrub starts
Nov 25 09:36:53 compute-1 ceph-mon[79643]: 7.a scrub ok
Nov 25 09:36:53 compute-1 ceph-mon[79643]: pgmap v123: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 3 objects/s recovering
Nov 25 09:36:53 compute-1 ceph-mon[79643]: 7.f scrub starts
Nov 25 09:36:53 compute-1 ceph-mon[79643]: 5.1 scrub starts
Nov 25 09:36:53 compute-1 ceph-mon[79643]: 7.f scrub ok
Nov 25 09:36:53 compute-1 ceph-mon[79643]: 5.1 scrub ok
Nov 25 09:36:53 compute-1 ceph-mon[79643]: osdmap e98: 3 total, 3 up, 3 in
Nov 25 09:36:54 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 25 09:36:54 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 25 09:36:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:54 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc03bf50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:54 compute-1 sudo[93888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emlyvzpyktweqrfkoqzfrqnaevpoyynv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063414.3219616-685-133124963718082/AnsiballZ_getent.py'
Nov 25 09:36:54 compute-1 sudo[93888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:54 compute-1 python3.9[93890]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 09:36:54 compute-1 sudo[93888]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:54 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de4004150 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:54 compute-1 ceph-mon[79643]: 8.5 scrub starts
Nov 25 09:36:54 compute-1 ceph-mon[79643]: 8.5 scrub ok
Nov 25 09:36:54 compute-1 ceph-mon[79643]: 11.f scrub starts
Nov 25 09:36:54 compute-1 ceph-mon[79643]: 11.f scrub ok
Nov 25 09:36:54 compute-1 ceph-mon[79643]: 5.3 scrub starts
Nov 25 09:36:54 compute-1 ceph-mon[79643]: 5.3 scrub ok
Nov 25 09:36:55 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 25 09:36:55 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 25 09:36:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:55 compute-1 sudo[94042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clwdcpvacymnwhnpoopvixdzoylrxeqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063415.119564-715-193982707826319/AnsiballZ_getent.py'
Nov 25 09:36:55 compute-1 sudo[94042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:55 compute-1 python3.9[94044]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 09:36:55 compute-1 sudo[94042]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:55 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:55.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:55 compute-1 sudo[94195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkfpoqlnzvwwzvtjdsfzgjaoeqksdbwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063415.6298249-739-199232456581080/AnsiballZ_group.py'
Nov 25 09:36:55 compute-1 sudo[94195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:55 compute-1 ceph-mon[79643]: 4.8 scrub starts
Nov 25 09:36:55 compute-1 ceph-mon[79643]: 4.8 scrub ok
Nov 25 09:36:55 compute-1 ceph-mon[79643]: pgmap v125: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:55 compute-1 ceph-mon[79643]: 2.4 deep-scrub starts
Nov 25 09:36:55 compute-1 ceph-mon[79643]: 2.4 deep-scrub ok
Nov 25 09:36:55 compute-1 ceph-mon[79643]: 8.8 deep-scrub starts
Nov 25 09:36:55 compute-1 ceph-mon[79643]: 8.8 deep-scrub ok
Nov 25 09:36:56 compute-1 python3.9[94197]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:36:56 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 25 09:36:56 compute-1 sudo[94195]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:56 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 25 09:36:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:56 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db40045f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:56 compute-1 sudo[94347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdywsoxdgcidkrhnrciszvhexvlqalyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063416.356028-766-253840936232825/AnsiballZ_file.py'
Nov 25 09:36:56 compute-1 sudo[94347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:56 compute-1 python3.9[94349]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 09:36:56 compute-1 sudo[94347]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:56 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c6410 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:56 compute-1 ceph-mon[79643]: 11.19 scrub starts
Nov 25 09:36:56 compute-1 ceph-mon[79643]: 11.19 scrub ok
Nov 25 09:36:56 compute-1 ceph-mon[79643]: 3.f scrub starts
Nov 25 09:36:56 compute-1 ceph-mon[79643]: 3.f scrub ok
Nov 25 09:36:56 compute-1 ceph-mon[79643]: 12.8 scrub starts
Nov 25 09:36:56 compute-1 ceph-mon[79643]: 12.8 scrub ok
Nov 25 09:36:57 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 09:36:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:57 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 09:36:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:36:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:57.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:36:57 compute-1 sudo[94500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frbxpabomodcdmkwxjeqmpenqiwohgma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063417.1319814-799-30383127142806/AnsiballZ_dnf.py'
Nov 25 09:36:57 compute-1 sudo[94500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:57 compute-1 python3.9[94502]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
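The dnf module with state=absent removes dracut-config-generic only if it is installed; the module is idempotent, so the decision reduces to a presence check. The same check outside Ansible (Python; relies on rpm -q exiting 0 when the package is installed, 1 otherwise):

    import subprocess

    def is_installed(pkg):
        """rpm -q exits 0 when the package is installed."""
        return subprocess.run(["rpm", "-q", pkg],
                              stdout=subprocess.DEVNULL).returncode == 0

    print(is_installed("dracut-config-generic"))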
Nov 25 09:36:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:57 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de4004150 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:57.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:58 compute-1 ceph-mon[79643]: 7.16 scrub starts
Nov 25 09:36:58 compute-1 ceph-mon[79643]: 7.16 scrub ok
Nov 25 09:36:58 compute-1 ceph-mon[79643]: 5.9 scrub starts
Nov 25 09:36:58 compute-1 ceph-mon[79643]: 5.9 scrub ok
Nov 25 09:36:58 compute-1 ceph-mon[79643]: pgmap v126: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
Nov 25 09:36:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 25 09:36:58 compute-1 ceph-mon[79643]: 2.6 scrub starts
Nov 25 09:36:58 compute-1 ceph-mon[79643]: 2.6 scrub ok
Nov 25 09:36:58 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 25 09:36:58 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Nov 25 09:36:58 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Nov 25 09:36:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:58 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:58 compute-1 sudo[94500]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:58 compute-1 sudo[94653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygyxpxtyjwynyjrcsqmvhmijetcszgbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063418.6529143-823-24474115002092/AnsiballZ_file.py'
Nov 25 09:36:58 compute-1 sudo[94653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:58 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004610 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:58 compute-1 python3.9[94655]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:36:58 compute-1 sudo[94653]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:59 compute-1 ceph-mon[79643]: 12.1e scrub starts
Nov 25 09:36:59 compute-1 ceph-mon[79643]: 12.1e scrub ok
Nov 25 09:36:59 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 25 09:36:59 compute-1 ceph-mon[79643]: osdmap e99: 3 total, 3 up, 3 in
Nov 25 09:36:59 compute-1 ceph-mon[79643]: 8.4 deep-scrub starts
Nov 25 09:36:59 compute-1 ceph-mon[79643]: 8.4 deep-scrub ok
Nov 25 09:36:59 compute-1 ceph-mon[79643]: 7.3 deep-scrub starts
Nov 25 09:36:59 compute-1 ceph-mon[79643]: 7.3 deep-scrub ok
Nov 25 09:36:59 compute-1 sudo[94656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:59 compute-1 sudo[94656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:59 compute-1 sudo[94656]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:59 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Nov 25 09:36:59 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Nov 25 09:36:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:59 compute-1 sudo[94831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znrgmvrqcucskzqmyeqrbdztwwtmxxiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063419.166016-848-144912622106487/AnsiballZ_stat.py'
Nov 25 09:36:59 compute-1 sudo[94831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:59 compute-1 python3.9[94833]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:36:59 compute-1 sudo[94831]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:36:59 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c6410 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:59 compute-1 sudo[94909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlemwjvafnqpngllchcgzyecfxuayqkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063419.166016-848-144912622106487/AnsiballZ_file.py'
Nov 25 09:36:59 compute-1 sudo[94909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:59 compute-1 python3.9[94911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:36:59 compute-1 sudo[94909]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:36:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:36:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:59.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:36:59 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:00 compute-1 ceph-mon[79643]: 2.13 deep-scrub starts
Nov 25 09:37:00 compute-1 ceph-mon[79643]: 2.13 deep-scrub ok
Nov 25 09:37:00 compute-1 ceph-mon[79643]: 11.7 deep-scrub starts
Nov 25 09:37:00 compute-1 ceph-mon[79643]: 11.7 deep-scrub ok
Nov 25 09:37:00 compute-1 ceph-mon[79643]: pgmap v128: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
Nov 25 09:37:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 25 09:37:00 compute-1 ceph-mon[79643]: 5.c scrub starts
Nov 25 09:37:00 compute-1 ceph-mon[79643]: 5.c scrub ok
Nov 25 09:37:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:00 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 25 09:37:00 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:00 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:00 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:00 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 09:37:00 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 09:37:00 compute-1 sudo[95061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqkaqxgwbveyxdoetzwdzlajjnjgcglu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063420.0795543-886-114701919627935/AnsiballZ_stat.py'
Nov 25 09:37:00 compute-1 sudo[95061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:00 compute-1 python3.9[95063]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:37:00 compute-1 sudo[95061]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:00 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de40055e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:00 compute-1 sudo[95139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzjwcofxaddiyyykdcpsjuksfzobdkez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063420.0795543-886-114701919627935/AnsiballZ_file.py'
Nov 25 09:37:00 compute-1 sudo[95139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:00 compute-1 python3.9[95141]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:00 compute-1 sudo[95139]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:00 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c6410 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:01 compute-1 ceph-mon[79643]: 4.14 scrub starts
Nov 25 09:37:01 compute-1 ceph-mon[79643]: 4.14 scrub ok
Nov 25 09:37:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 25 09:37:01 compute-1 ceph-mon[79643]: osdmap e100: 3 total, 3 up, 3 in
Nov 25 09:37:01 compute-1 ceph-mon[79643]: 5.f scrub starts
Nov 25 09:37:01 compute-1 ceph-mon[79643]: 5.f scrub ok
Nov 25 09:37:01 compute-1 ceph-mon[79643]: 7.9 scrub starts
Nov 25 09:37:01 compute-1 ceph-mon[79643]: 7.9 scrub ok
Nov 25 09:37:01 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 25 09:37:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 25 09:37:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 25 09:37:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:01.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:01 compute-1 sudo[95292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skglmpduubiowoplydbkjhovdmswnvyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063421.217292-931-39140169891911/AnsiballZ_dnf.py'
Nov 25 09:37:01 compute-1 sudo[95292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:01 compute-1 python3.9[95294]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:01 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de40055e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:01.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 25 09:37:02 compute-1 ceph-mon[79643]: 7.14 scrub starts
Nov 25 09:37:02 compute-1 ceph-mon[79643]: 7.14 scrub ok
Nov 25 09:37:02 compute-1 ceph-mon[79643]: osdmap e101: 3 total, 3 up, 3 in
Nov 25 09:37:02 compute-1 ceph-mon[79643]: 11.1 scrub starts
Nov 25 09:37:02 compute-1 ceph-mon[79643]: 11.1 scrub ok
Nov 25 09:37:02 compute-1 ceph-mon[79643]: pgmap v131: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 09:37:02 compute-1 ceph-mon[79643]: 5.a scrub starts
Nov 25 09:37:02 compute-1 ceph-mon[79643]: 5.a scrub ok
Nov 25 09:37:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:02 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 25 09:37:02 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 25 09:37:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:02 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:02 compute-1 sudo[95292]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 25 09:37:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:02 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=102/103 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:02 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004650 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:03 compute-1 ceph-mon[79643]: 12.1d scrub starts
Nov 25 09:37:03 compute-1 ceph-mon[79643]: 12.1d scrub ok
Nov 25 09:37:03 compute-1 ceph-mon[79643]: osdmap e102: 3 total, 3 up, 3 in
Nov 25 09:37:03 compute-1 ceph-mon[79643]: 4.c scrub starts
Nov 25 09:37:03 compute-1 ceph-mon[79643]: 4.c scrub ok
Nov 25 09:37:03 compute-1 ceph-mon[79643]: 10.5 scrub starts
Nov 25 09:37:03 compute-1 ceph-mon[79643]: 10.5 scrub ok
Nov 25 09:37:03 compute-1 ceph-mon[79643]: osdmap e103: 3 total, 3 up, 3 in
Nov 25 09:37:03 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 09:37:03 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 09:37:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:03.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:03 compute-1 python3.9[95446]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:37:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:03 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c6410 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 25 09:37:03 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=103/104 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:03.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:03 compute-1 python3.9[95598]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 09:37:04 compute-1 ceph-mon[79643]: 10.4 scrub starts
Nov 25 09:37:04 compute-1 ceph-mon[79643]: 10.4 scrub ok
Nov 25 09:37:04 compute-1 ceph-mon[79643]: 4.d scrub starts
Nov 25 09:37:04 compute-1 ceph-mon[79643]: 4.d scrub ok
Nov 25 09:37:04 compute-1 ceph-mon[79643]: pgmap v134: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:04 compute-1 ceph-mon[79643]: 7.8 scrub starts
Nov 25 09:37:04 compute-1 ceph-mon[79643]: 7.8 scrub ok
Nov 25 09:37:04 compute-1 ceph-mon[79643]: osdmap e104: 3 total, 3 up, 3 in
Nov 25 09:37:04 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 09:37:04 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 09:37:04 compute-1 python3.9[95748]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:37:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:04 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c6410 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:04 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de00095c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:05 compute-1 ceph-mon[79643]: 7.11 scrub starts
Nov 25 09:37:05 compute-1 ceph-mon[79643]: 7.11 scrub ok
Nov 25 09:37:05 compute-1 ceph-mon[79643]: 4.a scrub starts
Nov 25 09:37:05 compute-1 ceph-mon[79643]: 4.a scrub ok
Nov 25 09:37:05 compute-1 ceph-mon[79643]: 3.17 scrub starts
Nov 25 09:37:05 compute-1 ceph-mon[79643]: 3.17 scrub ok
Nov 25 09:37:05 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 25 09:37:05 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 25 09:37:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:05.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:05 compute-1 sudo[95899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztffrnjmltbxgxehbfhrwcwxrrdkbutc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063424.9327092-1054-249000770959684/AnsiballZ_systemd.py'
Nov 25 09:37:05 compute-1 sudo[95899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:05 compute-1 python3.9[95901]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:37:05 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 09:37:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:05 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:05 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 09:37:05 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 09:37:05 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 09:37:05 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 09:37:05 compute-1 sudo[95899]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:05.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:06 compute-1 ceph-mon[79643]: 12.2 scrub starts
Nov 25 09:37:06 compute-1 ceph-mon[79643]: 12.2 scrub ok
Nov 25 09:37:06 compute-1 ceph-mon[79643]: 11.5 scrub starts
Nov 25 09:37:06 compute-1 ceph-mon[79643]: 11.5 scrub ok
Nov 25 09:37:06 compute-1 ceph-mon[79643]: pgmap v136: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:06 compute-1 ceph-mon[79643]: 7.13 scrub starts
Nov 25 09:37:06 compute-1 ceph-mon[79643]: 7.13 scrub ok
Nov 25 09:37:06 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 25 09:37:06 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 25 09:37:06 compute-1 python3.9[96062]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 09:37:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:06 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de40055e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:06 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c7120 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:07 compute-1 ceph-mon[79643]: 2.10 scrub starts
Nov 25 09:37:07 compute-1 ceph-mon[79643]: 2.10 scrub ok
Nov 25 09:37:07 compute-1 ceph-mon[79643]: 8.1b scrub starts
Nov 25 09:37:07 compute-1 ceph-mon[79643]: 8.1b scrub ok
Nov 25 09:37:07 compute-1 ceph-mon[79643]: 12.19 scrub starts
Nov 25 09:37:07 compute-1 ceph-mon[79643]: 12.19 scrub ok
Nov 25 09:37:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:07 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 25 09:37:07 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 25 09:37:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:07.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:07 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:07.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:08 compute-1 ceph-mon[79643]: 4.15 scrub starts
Nov 25 09:37:08 compute-1 ceph-mon[79643]: 4.15 scrub ok
Nov 25 09:37:08 compute-1 ceph-mon[79643]: 5.16 scrub starts
Nov 25 09:37:08 compute-1 ceph-mon[79643]: 5.16 scrub ok
Nov 25 09:37:08 compute-1 ceph-mon[79643]: pgmap v137: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 13 op/s; 36 B/s, 1 objects/s recovering
Nov 25 09:37:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 25 09:37:08 compute-1 ceph-mon[79643]: 12.1c scrub starts
Nov 25 09:37:08 compute-1 ceph-mon[79643]: 12.1c scrub ok
Nov 25 09:37:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 25 09:37:08 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 25 09:37:08 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 25 09:37:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:08 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:08 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de40055e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:09 compute-1 ceph-mon[79643]: 2.15 scrub starts
Nov 25 09:37:09 compute-1 ceph-mon[79643]: 2.15 scrub ok
Nov 25 09:37:09 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 25 09:37:09 compute-1 ceph-mon[79643]: osdmap e105: 3 total, 3 up, 3 in
Nov 25 09:37:09 compute-1 ceph-mon[79643]: 5.7 scrub starts
Nov 25 09:37:09 compute-1 ceph-mon[79643]: 5.7 scrub ok
Nov 25 09:37:09 compute-1 ceph-mon[79643]: 7.10 scrub starts
Nov 25 09:37:09 compute-1 ceph-mon[79643]: 7.10 scrub ok
Nov 25 09:37:09 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 25 09:37:09 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 25 09:37:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:09.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:09 compute-1 sudo[96214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkfzlbtxqapeguivmwirbxueysuwgkwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063429.4984996-1225-234294081219537/AnsiballZ_systemd.py'
Nov 25 09:37:09 compute-1 sudo[96214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:09 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de40055e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:09 compute-1 python3.9[96216]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:37:09 compute-1 sudo[96214]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:10 compute-1 ceph-mon[79643]: 10.11 scrub starts
Nov 25 09:37:10 compute-1 ceph-mon[79643]: 10.11 scrub ok
Nov 25 09:37:10 compute-1 ceph-mon[79643]: pgmap v139: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 12 op/s; 34 B/s, 1 objects/s recovering
Nov 25 09:37:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 25 09:37:10 compute-1 ceph-mon[79643]: 5.14 scrub starts
Nov 25 09:37:10 compute-1 ceph-mon[79643]: 5.14 scrub ok
Nov 25 09:37:10 compute-1 ceph-mon[79643]: 11.4 scrub starts
Nov 25 09:37:10 compute-1 ceph-mon[79643]: 11.4 scrub ok
Nov 25 09:37:10 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 25 09:37:10 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 25 09:37:10 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 25 09:37:10 compute-1 sudo[96368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mowzepdutlwdhxbxhuhxycqjxptxpibt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063430.046839-1225-266394484651991/AnsiballZ_systemd.py'
Nov 25 09:37:10 compute-1 sudo[96368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:10 compute-1 python3.9[96370]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:37:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:10 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:10 compute-1 sudo[96368]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:10 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4004690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:11 compute-1 sshd-session[89616]: Connection closed by 192.168.122.30 port 44214
Nov 25 09:37:11 compute-1 sshd-session[89613]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:11 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Nov 25 09:37:11 compute-1 systemd[1]: session-36.scope: Consumed 45.850s CPU time.
Nov 25 09:37:11 compute-1 systemd-logind[746]: Session 36 logged out. Waiting for processes to exit.
Nov 25 09:37:11 compute-1 systemd-logind[746]: Removed session 36.
Nov 25 09:37:11 compute-1 ceph-mon[79643]: 2.12 scrub starts
Nov 25 09:37:11 compute-1 ceph-mon[79643]: 2.12 scrub ok
Nov 25 09:37:11 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 25 09:37:11 compute-1 ceph-mon[79643]: osdmap e106: 3 total, 3 up, 3 in
Nov 25 09:37:11 compute-1 ceph-mon[79643]: 3.12 scrub starts
Nov 25 09:37:11 compute-1 ceph-mon[79643]: 3.12 scrub ok
Nov 25 09:37:11 compute-1 ceph-mon[79643]: 5.2 scrub starts
Nov 25 09:37:11 compute-1 ceph-mon[79643]: 5.2 scrub ok
Nov 25 09:37:11 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 25 09:37:11 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 25 09:37:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:11 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c7120 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:11.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:12 compute-1 ceph-mon[79643]: 8.1f scrub starts
Nov 25 09:37:12 compute-1 ceph-mon[79643]: 8.1f scrub ok
Nov 25 09:37:12 compute-1 ceph-mon[79643]: pgmap v141: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s; 29 B/s, 1 objects/s recovering
Nov 25 09:37:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 25 09:37:12 compute-1 ceph-mon[79643]: 11.1b scrub starts
Nov 25 09:37:12 compute-1 ceph-mon[79643]: 5.17 scrub starts
Nov 25 09:37:12 compute-1 ceph-mon[79643]: 11.1b scrub ok
Nov 25 09:37:12 compute-1 ceph-mon[79643]: 5.17 scrub ok
Nov 25 09:37:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 25 09:37:12 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:12 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Nov 25 09:37:12 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Nov 25 09:37:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:12 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de40055e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 25 09:37:12 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:12 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:12 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:13 compute-1 ceph-mon[79643]: 12.17 scrub starts
Nov 25 09:37:13 compute-1 ceph-mon[79643]: 12.17 scrub ok
Nov 25 09:37:13 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 25 09:37:13 compute-1 ceph-mon[79643]: osdmap e107: 3 total, 3 up, 3 in
Nov 25 09:37:13 compute-1 ceph-mon[79643]: 10.18 scrub starts
Nov 25 09:37:13 compute-1 ceph-mon[79643]: 10.18 scrub ok
Nov 25 09:37:13 compute-1 ceph-mon[79643]: 8.18 deep-scrub starts
Nov 25 09:37:13 compute-1 ceph-mon[79643]: 8.18 deep-scrub ok
Nov 25 09:37:13 compute-1 ceph-mon[79643]: osdmap e108: 3 total, 3 up, 3 in
Nov 25 09:37:13 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 25 09:37:13 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 25 09:37:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:13 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db40046b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:13 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 25 09:37:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:14 compute-1 ceph-mon[79643]: 7.1d scrub starts
Nov 25 09:37:14 compute-1 ceph-mon[79643]: 7.1d scrub ok
Nov 25 09:37:14 compute-1 ceph-mon[79643]: 10.19 scrub starts
Nov 25 09:37:14 compute-1 ceph-mon[79643]: 10.19 scrub ok
Nov 25 09:37:14 compute-1 ceph-mon[79643]: pgmap v144: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 25 09:37:14 compute-1 ceph-mon[79643]: 3.13 scrub starts
Nov 25 09:37:14 compute-1 ceph-mon[79643]: 3.13 scrub ok
Nov 25 09:37:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 25 09:37:14 compute-1 ceph-mon[79643]: osdmap e109: 3 total, 3 up, 3 in
Nov 25 09:37:14 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 25 09:37:14 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535896301s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 active pruub 293.832427979s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:14 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535872459s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 293.832427979s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:14 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 25 09:37:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:14 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c7120 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:14 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:15 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 25 09:37:15 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 25 09:37:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:15 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:15 compute-1 ceph-mon[79643]: 10.10 scrub starts
Nov 25 09:37:15 compute-1 ceph-mon[79643]: 10.10 scrub ok
Nov 25 09:37:15 compute-1 ceph-mon[79643]: 5.15 scrub starts
Nov 25 09:37:15 compute-1 ceph-mon[79643]: 10.1b scrub starts
Nov 25 09:37:15 compute-1 ceph-mon[79643]: 5.15 scrub ok
Nov 25 09:37:15 compute-1 ceph-mon[79643]: 10.1b scrub ok
Nov 25 09:37:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:15 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 25 09:37:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:15 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:15.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 25 09:37:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:16 compute-1 ceph-mon[79643]: 2.18 scrub starts
Nov 25 09:37:16 compute-1 ceph-mon[79643]: 2.18 scrub ok
Nov 25 09:37:16 compute-1 ceph-mon[79643]: osdmap e110: 3 total, 3 up, 3 in
Nov 25 09:37:16 compute-1 ceph-mon[79643]: 11.1a scrub starts
Nov 25 09:37:16 compute-1 ceph-mon[79643]: 7.1b scrub starts
Nov 25 09:37:16 compute-1 ceph-mon[79643]: 11.1a scrub ok
Nov 25 09:37:16 compute-1 ceph-mon[79643]: pgmap v147: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:16 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 25 09:37:16 compute-1 ceph-mon[79643]: 7.1b scrub ok
Nov 25 09:37:16 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 25 09:37:16 compute-1 ceph-mon[79643]: osdmap e111: 3 total, 3 up, 3 in
Nov 25 09:37:16 compute-1 sshd-session[96402]: Accepted publickey for zuul from 192.168.122.30 port 39070 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:37:16 compute-1 systemd-logind[746]: New session 37 of user zuul.
Nov 25 09:37:16 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 25 09:37:16 compute-1 systemd[1]: Started Session 37 of User zuul.
Nov 25 09:37:16 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 25 09:37:16 compute-1 sshd-session[96402]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:37:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002130 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:16 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002130 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:16 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:16 compute-1 python3.9[96555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:17 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Nov 25 09:37:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 25 09:37:17 compute-1 ceph-mon[79643]: 4.1f scrub starts
Nov 25 09:37:17 compute-1 ceph-mon[79643]: 4.1f scrub ok
Nov 25 09:37:17 compute-1 ceph-mon[79643]: 2.1e scrub starts
Nov 25 09:37:17 compute-1 ceph-mon[79643]: 2.1e scrub ok
Nov 25 09:37:17 compute-1 ceph-mon[79643]: 3.10 scrub starts
Nov 25 09:37:17 compute-1 ceph-mon[79643]: 3.10 scrub ok
Nov 25 09:37:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742835999s) [2] async=[2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 42'1151 active pruub 304.033843994s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:17 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742785454s) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 304.033843994s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:17 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Nov 25 09:37:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:17.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:17 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:17 compute-1 sudo[96710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idescgfqkcnlhlcjaopzqpjcxhvypfjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063437.4976254-69-213500751271876/AnsiballZ_getent.py'
Nov 25 09:37:17 compute-1 sudo[96710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:17 compute-1 python3.9[96712]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 09:37:17 compute-1 sudo[96710]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:18 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Nov 25 09:37:18 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Nov 25 09:37:18 compute-1 ceph-mon[79643]: 8.11 scrub starts
Nov 25 09:37:18 compute-1 ceph-mon[79643]: 8.11 scrub ok
Nov 25 09:37:18 compute-1 ceph-mon[79643]: 5.19 deep-scrub starts
Nov 25 09:37:18 compute-1 ceph-mon[79643]: 5.19 deep-scrub ok
Nov 25 09:37:18 compute-1 ceph-mon[79643]: pgmap v149: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Nov 25 09:37:18 compute-1 ceph-mon[79643]: 4.13 deep-scrub starts
Nov 25 09:37:18 compute-1 ceph-mon[79643]: osdmap e112: 3 total, 3 up, 3 in
Nov 25 09:37:18 compute-1 ceph-mon[79643]: 4.13 deep-scrub ok
Nov 25 09:37:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 25 09:37:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:18 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:18 compute-1 sudo[96863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxzflclurfhztgtbspvqrszmptczfjht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063438.368268-105-185956366581315/AnsiballZ_setup.py'
Nov 25 09:37:18 compute-1 sudo[96863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:18 compute-1 python3.9[96865]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:37:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:18 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002130 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:18 compute-1 sudo[96863]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:19 compute-1 sudo[96874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:19 compute-1 sudo[96874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:19 compute-1 sudo[96874]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:19 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 25 09:37:19 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 25 09:37:19 compute-1 ceph-mon[79643]: 7.1f scrub starts
Nov 25 09:37:19 compute-1 ceph-mon[79643]: 7.1f scrub ok
Nov 25 09:37:19 compute-1 ceph-mon[79643]: 12.b deep-scrub starts
Nov 25 09:37:19 compute-1 ceph-mon[79643]: 12.b deep-scrub ok
Nov 25 09:37:19 compute-1 ceph-mon[79643]: 11.1c deep-scrub starts
Nov 25 09:37:19 compute-1 ceph-mon[79643]: 11.1c deep-scrub ok
Nov 25 09:37:19 compute-1 ceph-mon[79643]: osdmap e113: 3 total, 3 up, 3 in
Nov 25 09:37:19 compute-1 sudo[96973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdhgtqzddnwmcfzullxfckpiptwdlont ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063438.368268-105-185956366581315/AnsiballZ_dnf.py'
Nov 25 09:37:19 compute-1 sudo[96973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:19.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:19 compute-1 python3.9[96975]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 09:37:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:19 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002130 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:19.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:20 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 25 09:37:20 compute-1 ceph-mon[79643]: 11.13 scrub starts
Nov 25 09:37:20 compute-1 ceph-mon[79643]: 11.13 scrub ok
Nov 25 09:37:20 compute-1 ceph-mon[79643]: 5.10 deep-scrub starts
Nov 25 09:37:20 compute-1 ceph-mon[79643]: 10.15 scrub starts
Nov 25 09:37:20 compute-1 ceph-mon[79643]: 5.10 deep-scrub ok
Nov 25 09:37:20 compute-1 ceph-mon[79643]: 10.15 scrub ok
Nov 25 09:37:20 compute-1 ceph-mon[79643]: pgmap v152: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 25 09:37:20 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 25 09:37:20 compute-1 sudo[96973]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:20 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:20 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:20 compute-1 sudo[97126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izcbwdsfdpsqvivcqrnwriygcycjppnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063440.7462447-147-18843194425540/AnsiballZ_dnf.py'
Nov 25 09:37:20 compute-1 sudo[97126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:21 compute-1 python3.9[97128]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:21 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 09:37:21 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 09:37:21 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 25 09:37:21 compute-1 ceph-mon[79643]: 10.12 scrub starts
Nov 25 09:37:21 compute-1 ceph-mon[79643]: 10.12 scrub ok
Nov 25 09:37:21 compute-1 ceph-mon[79643]: 10.14 scrub starts
Nov 25 09:37:21 compute-1 ceph-mon[79643]: 10.14 scrub ok
Nov 25 09:37:21 compute-1 ceph-mon[79643]: 11.1e scrub starts
Nov 25 09:37:21 compute-1 ceph-mon[79643]: 11.1e scrub ok
Nov 25 09:37:21 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 25 09:37:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:21.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:21 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002130 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:21.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:22 compute-1 sudo[97126]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:22 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 09:37:22 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 09:37:22 compute-1 ceph-mon[79643]: 10.1e scrub starts
Nov 25 09:37:22 compute-1 ceph-mon[79643]: 10.1e scrub ok
Nov 25 09:37:22 compute-1 ceph-mon[79643]: 3.14 scrub starts
Nov 25 09:37:22 compute-1 ceph-mon[79643]: 12.a scrub starts
Nov 25 09:37:22 compute-1 ceph-mon[79643]: 3.14 scrub ok
Nov 25 09:37:22 compute-1 ceph-mon[79643]: 12.a scrub ok
Nov 25 09:37:22 compute-1 ceph-mon[79643]: pgmap v153: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Nov 25 09:37:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
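Note: this dispatch/finished audit pair shows the mgr stepping pgp_num_actual on pool default.rgw.log one placement group at a time (25 here, then 26, 27, ... below). Ramping pgp_num gradually spreads the data movement from a pg_num increase over many small steps, which is also why osdmap epochs e114, e115, ... tick past in the surrounding lines. The equivalent manual step, using the exact command the audit log records, as a sketch (running it by hand is normally unnecessary since the mgr automates it):

    import subprocess

    # Step pgp_num_actual up one PG at a time, mirroring the mgr's behaviour
    for val in range(25, 31):
        subprocess.run(
            ["ceph", "osd", "pool", "set", "default.rgw.log",
             "pgp_num_actual", str(val)],
            check=True,
        )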
Nov 25 09:37:22 compute-1 ceph-mon[79643]: osdmap e114: 3 total, 3 up, 3 in
Nov 25 09:37:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:22 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c72c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:22 compute-1 sudo[97280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfvqsfzdoxkhqzmlrjlmkwpqmjbywyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063442.2514992-171-147297101104697/AnsiballZ_systemd.py'
Nov 25 09:37:22 compute-1 sudo[97280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:22 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:22 compute-1 python3.9[97282]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
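Note: ansible.builtin.systemd with enabled=True state=started is the module form of systemctl enable --now; daemon_reload=False in the logged parameters means no daemon-reload is issued first. Sketch:

    import subprocess

    # Enable at boot and start immediately, matching enabled=True state=started
    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"],
                   check=True)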
Nov 25 09:37:22 compute-1 sudo[97280]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:23 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 25 09:37:23 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 25 09:37:23 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 25 09:37:23 compute-1 ceph-mon[79643]: 8.1c scrub starts
Nov 25 09:37:23 compute-1 ceph-mon[79643]: 8.1c scrub ok
Nov 25 09:37:23 compute-1 ceph-mon[79643]: 12.6 scrub starts
Nov 25 09:37:23 compute-1 ceph-mon[79643]: 12.6 scrub ok
Nov 25 09:37:23 compute-1 ceph-mon[79643]: 3.16 scrub starts
Nov 25 09:37:23 compute-1 ceph-mon[79643]: 3.16 scrub ok
Nov 25 09:37:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 25 09:37:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:23.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:23 compute-1 python3.9[97436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:23 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:23.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:24 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 25 09:37:24 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 25 09:37:24 compute-1 sudo[97586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibcfchhlalyqgiqhgnwarklnmxhdqxhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063443.820272-225-266618349121713/AnsiballZ_sefcontext.py'
Nov 25 09:37:24 compute-1 sudo[97586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:24 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 25 09:37:24 compute-1 ceph-mon[79643]: 10.3 scrub starts
Nov 25 09:37:24 compute-1 ceph-mon[79643]: 10.3 scrub ok
Nov 25 09:37:24 compute-1 ceph-mon[79643]: 12.10 scrub starts
Nov 25 09:37:24 compute-1 ceph-mon[79643]: 4.e scrub starts
Nov 25 09:37:24 compute-1 ceph-mon[79643]: 4.e scrub ok
Nov 25 09:37:24 compute-1 ceph-mon[79643]: 12.10 scrub ok
Nov 25 09:37:24 compute-1 ceph-mon[79643]: pgmap v155: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 09:37:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 25 09:37:24 compute-1 ceph-mon[79643]: osdmap e115: 3 total, 3 up, 3 in
Nov 25 09:37:24 compute-1 ceph-mon[79643]: osdmap e116: 3 total, 3 up, 3 in
Nov 25 09:37:24 compute-1 python3.9[97588]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
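Note: community.general.sefcontext with target=/var/lib/edpm-config(/.*)? and setype=container_file_t registers an SELinux file-context rule and, with reload=True, reapplies contexts; the directory itself is created and labelled a few tasks later (see the builtin.file call below). A sketch with the semanage/restorecon CLIs, which is roughly what the module wraps:

    import subprocess

    # Add the file-context rule (selevel s0 from the log is the default range)
    subprocess.run(
        ["semanage", "fcontext", "-a", "-t", "container_file_t",
         r"/var/lib/edpm-config(/.*)?"],
        check=True,
    )
    # Once the path exists, relabel anything already on disk
    subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=False)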
Nov 25 09:37:24 compute-1 sudo[97586]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:24 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:24 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c72c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:25 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 09:37:25 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 09:37:25 compute-1 python3.9[97738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:25 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 25 09:37:25 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654538155s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 active pruub 310.990783691s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:25 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654507637s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 310.990783691s@ mbc={}] state<Start>: transitioning to Stray
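Note: these two osd.0 lines are one complete peering-interval change for pg 9.1a: the up/acting set moves from [0] to [1], and role 0 -> -1 means osd.0 is no longer in the acting set, so it parks the PG as Stray until the new primary contacts it. Together with the matching pg 9.1d events further down, this is expected churn from the pgp_num_actual stepping above, not a failure. A small sketch that extracts the transition from such a line (the regex is an assumption about this exact message format):

    import re

    line = ("PeeringState::start_peering_interval up [0] -> [1], "
            "acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, "
            "role 0 -> -1")
    m = re.search(r"up \[([\d,]*)\] -> \[([\d,]*)\], "
                  r"acting \[([\d,]*)\] -> \[([\d,]*)\]", line)
    if m:
        up_old, up_new, act_old, act_new = m.groups()
        print(f"up [{up_old}]->[{up_new}], acting [{act_old}]->[{act_new}]")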
Nov 25 09:37:25 compute-1 ceph-mon[79643]: 12.7 scrub starts
Nov 25 09:37:25 compute-1 ceph-mon[79643]: 12.7 scrub ok
Nov 25 09:37:25 compute-1 ceph-mon[79643]: 6.4 scrub starts
Nov 25 09:37:25 compute-1 ceph-mon[79643]: 5.1f scrub starts
Nov 25 09:37:25 compute-1 ceph-mon[79643]: 5.1f scrub ok
Nov 25 09:37:25 compute-1 ceph-mon[79643]: 6.4 scrub ok
Nov 25 09:37:25 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 25 09:37:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:25.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:25 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:25 compute-1 sudo[97895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nunopanootozanjyxokumxpivbavcdzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063445.5307386-279-45757630240987/AnsiballZ_dnf.py'
Nov 25 09:37:25 compute-1 sudo[97895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:25 compute-1 python3.9[97897]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:26 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 25 09:37:26 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 25 09:37:26 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 25 09:37:26 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:26 compute-1 ceph-mon[79643]: 8.c scrub starts
Nov 25 09:37:26 compute-1 ceph-mon[79643]: 5.11 scrub starts
Nov 25 09:37:26 compute-1 ceph-mon[79643]: 5.11 scrub ok
Nov 25 09:37:26 compute-1 ceph-mon[79643]: 8.c scrub ok
Nov 25 09:37:26 compute-1 ceph-mon[79643]: 6.6 scrub starts
Nov 25 09:37:26 compute-1 ceph-mon[79643]: 6.6 scrub ok
Nov 25 09:37:26 compute-1 ceph-mon[79643]: pgmap v158: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 09:37:26 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 25 09:37:26 compute-1 ceph-mon[79643]: osdmap e117: 3 total, 3 up, 3 in
Nov 25 09:37:26 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:26 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:26 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:26 compute-1 sudo[97895]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:27 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 25 09:37:27 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 25 09:37:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-1 ceph-mon[79643]: 8.b scrub starts
Nov 25 09:37:27 compute-1 ceph-mon[79643]: 8.b scrub ok
Nov 25 09:37:27 compute-1 ceph-mon[79643]: 11.1d scrub starts
Nov 25 09:37:27 compute-1 ceph-mon[79643]: 11.1d scrub ok
Nov 25 09:37:27 compute-1 ceph-mon[79643]: 6.0 scrub starts
Nov 25 09:37:27 compute-1 ceph-mon[79643]: 6.0 scrub ok
Nov 25 09:37:27 compute-1 ceph-mon[79643]: osdmap e118: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-1 ceph-mon[79643]: osdmap e119: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.234051) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447234071, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3416, "num_deletes": 251, "total_data_size": 7934234, "memory_usage": 8046816, "flush_reason": "Manual Compaction"}
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447243644, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 5110746, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7195, "largest_seqno": 10606, "table_properties": {"data_size": 5094817, "index_size": 10247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4613, "raw_key_size": 42628, "raw_average_key_size": 23, "raw_value_size": 5059393, "raw_average_value_size": 2782, "num_data_blocks": 444, "num_entries": 1818, "num_filter_entries": 1818, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063323, "oldest_key_time": 1764063323, "file_creation_time": 1764063447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9617 microseconds, and 6620 cpu microseconds.
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.243668) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 5110746 bytes OK
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.243679) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.243976) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.243987) EVENT_LOG_v1 {"time_micros": 1764063447243984, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.243997) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7917163, prev total WAL file size 7917163, number of live WAL files 2.
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.244950) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4990KB)], [18(12MB)]
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447244965, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 17799114, "oldest_snapshot_seqno": -1}
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3922 keys, 13861659 bytes, temperature: kUnknown
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447275922, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 13861659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13829791, "index_size": 20942, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9861, "raw_key_size": 99665, "raw_average_key_size": 25, "raw_value_size": 13752322, "raw_average_value_size": 3506, "num_data_blocks": 906, "num_entries": 3922, "num_filter_entries": 3922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764063447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.276147) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13861659 bytes
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.276763) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 572.4 rd, 445.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.9, 12.1 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 4452, records dropped: 530 output_compression: NoCompression
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.276786) EVENT_LOG_v1 {"time_micros": 1764063447276771, "job": 8, "event": "compaction_finished", "compaction_time_micros": 31093, "compaction_time_cpu_micros": 19563, "output_level": 6, "num_output_files": 1, "total_output_size": 13861659, "num_input_records": 4452, "num_output_records": 3922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447277748, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447279399, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.244922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.279516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.279518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.279519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.279520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:37:27.279521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
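Note: the rocksdb block above is one flush-plus-manual-compaction cycle in the mon's store.db: JOB 7 flushes a ~7.9 MB memtable to L0 table #20 (5,110,746 bytes), then JOB 8 merges #20 with L6 table #18 into #21 (13,861,659 bytes) and deletes both inputs, leaving lsm_state [0,0,0,0,0,0,1]. The amplification figures in the JOB 8 summary follow directly from the logged MB counts:

    # in(4.9, 12.1) out(13.2) MB, straight from the compaction summary line
    l0_in, l6_in, out = 4.9, 12.1, 13.2
    print(f"read-write-amplify {(l0_in + l6_in + out) / l0_in:.1f}")  # 6.2
    print(f"write-amplify      {out / l0_in:.1f}")                    # 2.7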
Nov 25 09:37:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:27.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:27 compute-1 sudo[98049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvgfcgmydhcdcjzqbqtlnpnprkfsqwzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063447.0313382-303-38219674065072/AnsiballZ_command.py'
Nov 25 09:37:27 compute-1 sudo[98049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:27 compute-1 python3.9[98051]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
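Note: right after the dnf task, the play re-checks the same package list with rpm -V. rpm prints one line per file whose size, mode, digest or context differs from the rpm database and exits non-zero when anything fails verification or a package is missing, so empty output with status 0 means a pristine install. Sketch over a subset of the logged list:

    import subprocess

    pkgs = ["driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager"]
    res = subprocess.run(["rpm", "-V", *pkgs], capture_output=True, text=True)
    # Empty stdout and returncode 0: every verified attribute still matches
    print("clean" if res.returncode == 0 and not res.stdout else res.stdout)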
Nov 25 09:37:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:27 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.508149147s) [1] async=[1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 42'1151 active pruub 314.354034424s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:27 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.507986069s) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 314.354034424s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093727 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
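Note: the haproxy sidecar fronting the NFS service marks backend nfs.cephfs.2 DOWN on a Layer4 failure, i.e. the TCP connect itself was refused, consistent with the ganesha instance on that backend restarting while the svc_vc_recv errors accumulate; two of three servers remain up, so the service stays available. A Layer4 check of this kind is just a TCP handshake (host and port below are assumptions):

    import socket

    def l4_up(host: str, port: int, timeout: float = 2.0) -> bool:
        # haproxy-style Layer4 check: does a TCP handshake complete?
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:   # refused, timed out, unreachable
            return False

    print(l4_up("192.168.122.102", 2049))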
Nov 25 09:37:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:27.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:28 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 25 09:37:28 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 25 09:37:28 compute-1 sudo[98049]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:28 compute-1 ceph-mon[79643]: 6.b scrub starts
Nov 25 09:37:28 compute-1 ceph-mon[79643]: 6.b scrub ok
Nov 25 09:37:28 compute-1 ceph-mon[79643]: 6.1 scrub starts
Nov 25 09:37:28 compute-1 ceph-mon[79643]: 8.19 scrub starts
Nov 25 09:37:28 compute-1 ceph-mon[79643]: 6.1 scrub ok
Nov 25 09:37:28 compute-1 ceph-mon[79643]: 8.19 scrub ok
Nov 25 09:37:28 compute-1 ceph-mon[79643]: pgmap v161: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 459 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:28 compute-1 ceph-mon[79643]: osdmap e120: 3 total, 3 up, 3 in
Nov 25 09:37:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:28 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:28 compute-1 sudo[98336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jotjnbhllmorzxtropndibajwulbsbub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063448.234196-327-164070085685135/AnsiballZ_file.py'
Nov 25 09:37:28 compute-1 sudo[98336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:28 compute-1 python3.9[98338]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
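Note: the builtin.file task then creates /var/lib/edpm-config as a directory with mode 0750 and the container_file_t context whose rule was registered a few tasks earlier. Local equivalent, as a sketch:

    import os
    import subprocess

    os.makedirs("/var/lib/edpm-config", exist_ok=True)
    os.chmod("/var/lib/edpm-config", 0o750)  # makedirs mode is umask-masked
    # Apply the sefcontext rule registered above
    subprocess.run(["restorecon", "-v", "/var/lib/edpm-config"], check=True)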
Nov 25 09:37:28 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 25 09:37:28 compute-1 sudo[98336]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:28 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:28 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 25 09:37:29 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 25 09:37:29 compute-1 ceph-mon[79643]: 6.9 scrub starts
Nov 25 09:37:29 compute-1 ceph-mon[79643]: 9.9 scrub starts
Nov 25 09:37:29 compute-1 ceph-mon[79643]: 6.3 scrub starts
Nov 25 09:37:29 compute-1 ceph-mon[79643]: 6.9 scrub ok
Nov 25 09:37:29 compute-1 ceph-mon[79643]: 6.3 scrub ok
Nov 25 09:37:29 compute-1 ceph-mon[79643]: 9.9 scrub ok
Nov 25 09:37:29 compute-1 ceph-mon[79643]: osdmap e121: 3 total, 3 up, 3 in
Nov 25 09:37:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:29.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:29 compute-1 python3.9[98489]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:37:29 compute-1 sudo[98641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzydvsincwpvjfvvgsffosvqjpeacbku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063449.5213108-375-149681260026875/AnsiballZ_dnf.py'
Nov 25 09:37:29 compute-1 sudo[98641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:29 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:29 compute-1 python3.9[98643]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:29.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:29 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 25 09:37:29 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 25 09:37:30 compute-1 ceph-mon[79643]: 9.17 scrub starts
Nov 25 09:37:30 compute-1 ceph-mon[79643]: 6.2 scrub starts
Nov 25 09:37:30 compute-1 ceph-mon[79643]: 9.17 scrub ok
Nov 25 09:37:30 compute-1 ceph-mon[79643]: 6.2 scrub ok
Nov 25 09:37:30 compute-1 ceph-mon[79643]: 6.c scrub starts
Nov 25 09:37:30 compute-1 ceph-mon[79643]: 6.c scrub ok
Nov 25 09:37:30 compute-1 ceph-mon[79643]: pgmap v165: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 459 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:30 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c7460 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:30 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:30 compute-1 sudo[98641]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:31 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 25 09:37:31 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 25 09:37:31 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 25 09:37:31 compute-1 ceph-mon[79643]: 6.5 scrub starts
Nov 25 09:37:31 compute-1 ceph-mon[79643]: 9.16 scrub starts
Nov 25 09:37:31 compute-1 ceph-mon[79643]: 6.5 scrub ok
Nov 25 09:37:31 compute-1 ceph-mon[79643]: 9.16 scrub ok
Nov 25 09:37:31 compute-1 ceph-mon[79643]: 6.f scrub starts
Nov 25 09:37:31 compute-1 ceph-mon[79643]: 6.f scrub ok
Nov 25 09:37:31 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 25 09:37:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:31.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:31 compute-1 sudo[98795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecfcykzdxehfeikwvywrkogrhceltait ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063451.1886382-402-154933101374853/AnsiballZ_dnf.py'
Nov 25 09:37:31 compute-1 sudo[98795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:31 compute-1 python3.9[98797]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:31 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:31.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:31 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 25 09:37:32 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 25 09:37:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:32 compute-1 ceph-mon[79643]: 9.3 scrub starts
Nov 25 09:37:32 compute-1 ceph-mon[79643]: 6.a scrub starts
Nov 25 09:37:32 compute-1 ceph-mon[79643]: 6.a scrub ok
Nov 25 09:37:32 compute-1 ceph-mon[79643]: 9.3 scrub ok
Nov 25 09:37:32 compute-1 ceph-mon[79643]: 9.14 deep-scrub starts
Nov 25 09:37:32 compute-1 ceph-mon[79643]: 9.14 deep-scrub ok
Nov 25 09:37:32 compute-1 ceph-mon[79643]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 25 09:37:32 compute-1 ceph-mon[79643]: osdmap e122: 3 total, 3 up, 3 in
Nov 25 09:37:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 25 09:37:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:32 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c8470 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:32 compute-1 sudo[98795]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:32 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:32 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 09:37:32 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 09:37:33 compute-1 sudo[98948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeszdkfjwxwvzhcttcvporizpzoikdee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063452.9903703-438-138623147907977/AnsiballZ_stat.py'
Nov 25 09:37:33 compute-1 sudo[98948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:33 compute-1 ceph-mon[79643]: 9.8 scrub starts
Nov 25 09:37:33 compute-1 ceph-mon[79643]: 6.7 scrub starts
Nov 25 09:37:33 compute-1 ceph-mon[79643]: 6.7 scrub ok
Nov 25 09:37:33 compute-1 ceph-mon[79643]: 9.8 scrub ok
Nov 25 09:37:33 compute-1 ceph-mon[79643]: 9.c scrub starts
Nov 25 09:37:33 compute-1 ceph-mon[79643]: 9.c scrub ok
Nov 25 09:37:33 compute-1 ceph-mon[79643]: osdmap e123: 3 total, 3 up, 3 in
Nov 25 09:37:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 09:37:33 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 25 09:37:33 compute-1 python3.9[98950]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:37:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:33.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:33 compute-1 sudo[98948]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:33 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c8470 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:33 compute-1 sudo[99103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzzeslquqzvvblnomctncvojmdrvcjkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063453.5110958-462-220447782930085/AnsiballZ_slurp.py'
Nov 25 09:37:33 compute-1 sudo[99103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:33.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:33 compute-1 python3.9[99105]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
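Note: the stat/slurp pair reads /var/lib/edpm-config/os-net-config.returncode back to the controller to decide whether networking was already configured successfully; slurp returns the file content base64-encoded, so the playbook decodes it before comparing. Controller-side sketch (the sample value is an assumption):

    import base64

    slurp_result = {"content": base64.b64encode(b"0\n").decode()}
    rc = int(base64.b64decode(slurp_result["content"]).decode().strip())
    print("os-net-config previously succeeded" if rc == 0 else f"rc={rc}")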
Nov 25 09:37:33 compute-1 sudo[99103]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:33 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 09:37:33 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 09:37:34 compute-1 ceph-mon[79643]: 6.d scrub starts
Nov 25 09:37:34 compute-1 ceph-mon[79643]: 6.d scrub ok
Nov 25 09:37:34 compute-1 ceph-mon[79643]: 9.b scrub starts
Nov 25 09:37:34 compute-1 ceph-mon[79643]: 9.b scrub ok
Nov 25 09:37:34 compute-1 ceph-mon[79643]: 9.2 scrub starts
Nov 25 09:37:34 compute-1 ceph-mon[79643]: 9.2 scrub ok
Nov 25 09:37:34 compute-1 ceph-mon[79643]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:34 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 09:37:34 compute-1 ceph-mon[79643]: osdmap e124: 3 total, 3 up, 3 in
Nov 25 09:37:34 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 25 09:37:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:34 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:34 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de000a080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:34 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 09:37:34 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 09:37:35 compute-1 sshd-session[96405]: Connection closed by 192.168.122.30 port 39070
Nov 25 09:37:35 compute-1 sshd-session[96402]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:35 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Nov 25 09:37:35 compute-1 systemd[1]: session-37.scope: Consumed 12.835s CPU time.
Nov 25 09:37:35 compute-1 systemd-logind[746]: Session 37 logged out. Waiting for processes to exit.
Nov 25 09:37:35 compute-1 systemd-logind[746]: Removed session 37.
Nov 25 09:37:35 compute-1 ceph-mon[79643]: 6.e scrub starts
Nov 25 09:37:35 compute-1 ceph-mon[79643]: 6.e scrub ok
Nov 25 09:37:35 compute-1 ceph-mon[79643]: 9.7 scrub starts
Nov 25 09:37:35 compute-1 ceph-mon[79643]: 9.7 scrub ok
Nov 25 09:37:35 compute-1 ceph-mon[79643]: 9.0 scrub starts
Nov 25 09:37:35 compute-1 ceph-mon[79643]: 9.0 scrub ok
Nov 25 09:37:35 compute-1 ceph-mon[79643]: osdmap e125: 3 total, 3 up, 3 in
Nov 25 09:37:35 compute-1 ceph-mon[79643]: 6.8 scrub starts
Nov 25 09:37:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 25 09:37:35 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 25 09:37:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:35.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:35 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506638527s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 active pruub 317.039733887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:35 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506608963s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 317.039733887s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:35 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:37:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:35 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:35 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 25 09:37:35 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 25 09:37:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:35.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:36 compute-1 ceph-mon[79643]: 6.8 scrub ok
Nov 25 09:37:36 compute-1 ceph-mon[79643]: 9.5 scrub starts
Nov 25 09:37:36 compute-1 ceph-mon[79643]: 9.5 scrub ok
Nov 25 09:37:36 compute-1 ceph-mon[79643]: 9.1 scrub starts
Nov 25 09:37:36 compute-1 ceph-mon[79643]: 9.1 scrub ok
Nov 25 09:37:36 compute-1 ceph-mon[79643]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:36 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 25 09:37:36 compute-1 ceph-mon[79643]: osdmap e126: 3 total, 3 up, 3 in
Nov 25 09:37:36 compute-1 ceph-mon[79643]: 9.10 scrub starts
Nov 25 09:37:36 compute-1 ceph-mon[79643]: 9.10 scrub ok
Nov 25 09:37:36 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 25 09:37:36 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:36 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:36 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:36 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:36 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 25 09:37:36 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 25 09:37:36 compute-1 sudo[99132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:36 compute-1 sudo[99132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:36 compute-1 sudo[99132]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:36 compute-1 sudo[99157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
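Note: this ceph-admin sudo pair is the cephadm orchestrator SSHing in as ceph-admin and running the copied cephadm binary with gather-facts, its periodic host-inventory pass; the `which python3` just before is how it locates an interpreter. What gets invoked, as logged (gather-facts prints a JSON fact dump, to the best of my reading):

    import json
    import subprocess

    out = subprocess.run(
        ["sudo", "/bin/python3",
         "/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
         "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
         "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True)
    facts = json.loads(out.stdout)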
Nov 25 09:37:36 compute-1 sudo[99157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 25 09:37:37 compute-1 ceph-mon[79643]: 9.18 scrub starts
Nov 25 09:37:37 compute-1 ceph-mon[79643]: 9.18 scrub ok
Nov 25 09:37:37 compute-1 ceph-mon[79643]: 9.4 scrub starts
Nov 25 09:37:37 compute-1 ceph-mon[79643]: 9.4 scrub ok
Nov 25 09:37:37 compute-1 ceph-mon[79643]: osdmap e127: 3 total, 3 up, 3 in
Nov 25 09:37:37 compute-1 ceph-mon[79643]: 9.11 scrub starts
Nov 25 09:37:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:37 compute-1 sudo[99157]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:37 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:37 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:37.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:38 compute-1 ceph-mon[79643]: 9.13 deep-scrub starts
Nov 25 09:37:38 compute-1 ceph-mon[79643]: 9.11 scrub ok
Nov 25 09:37:38 compute-1 ceph-mon[79643]: 9.13 deep-scrub ok
Nov 25 09:37:38 compute-1 ceph-mon[79643]: 9.1c scrub starts
Nov 25 09:37:38 compute-1 ceph-mon[79643]: pgmap v175: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:38 compute-1 ceph-mon[79643]: 9.1c scrub ok
Nov 25 09:37:38 compute-1 ceph-mon[79643]: osdmap e128: 3 total, 3 up, 3 in
Nov 25 09:37:38 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 25 09:37:38 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581406593s) [2] async=[2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 42'1151 active pruub 325.023925781s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:38 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581371307s) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.023925781s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:37:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:37:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
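Note: tying the NFS thread together, ganesha entered a 90-second grace period at 09:37:35 (the membership change haproxy reported above), reloaded reclaim data from the backend, and the reaper now checks whether grace can be lifted early; "reclaim complete(0) clid count(0)" means no clients hold state to reclaim. A hedged reading of that lift condition:

    def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        # Hedged reading of the logged counters: grace may lift once every
        # client that must reclaim has done so; trivially true at 0/0.
        return reclaim_complete >= clid_count

    print(can_lift_grace(0, 0))  # True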
Nov 25 09:37:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:38 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:39 compute-1 sudo[99212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:39 compute-1 sudo[99212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:39 compute-1 sudo[99212]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:39 compute-1 ceph-mon[79643]: 9.1a scrub starts
Nov 25 09:37:39 compute-1 ceph-mon[79643]: 9.1a scrub ok
Nov 25 09:37:39 compute-1 ceph-mon[79643]: osdmap e129: 3 total, 3 up, 3 in
Nov 25 09:37:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:39 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 25 09:37:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:39.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:39 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:39.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:40 compute-1 ceph-mon[79643]: pgmap v178: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:40 compute-1 ceph-mon[79643]: 9.19 deep-scrub starts
Nov 25 09:37:40 compute-1 ceph-mon[79643]: 9.19 deep-scrub ok
Nov 25 09:37:40 compute-1 ceph-mon[79643]: osdmap e130: 3 total, 3 up, 3 in
Nov 25 09:37:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:37:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:37:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:37:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:40 compute-1 sshd-session[99238]: Accepted publickey for zuul from 192.168.122.30 port 47900 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:37:40 compute-1 systemd-logind[746]: New session 38 of user zuul.
Nov 25 09:37:40 compute-1 systemd[1]: Started Session 38 of User zuul.
Nov 25 09:37:40 compute-1 sshd-session[99238]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:37:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:40 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c7460 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:40 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:41 compute-1 python3.9[99391]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:41.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:41 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 25 09:37:41 compute-1 ceph-mon[79643]: 9.1d scrub starts
Nov 25 09:37:41 compute-1 ceph-mon[79643]: 9.1d scrub ok
Nov 25 09:37:41 compute-1 ceph-mon[79643]: 9.1b deep-scrub starts
Nov 25 09:37:41 compute-1 ceph-mon[79643]: 9.1b deep-scrub ok
Nov 25 09:37:41 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 25 09:37:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:41 : epoch 69257854 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:37:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:41 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:42 compute-1 python3.9[99546]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:37:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:42 compute-1 ceph-mon[79643]: pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:42 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 25 09:37:42 compute-1 ceph-mon[79643]: osdmap e131: 3 total, 3 up, 3 in
Nov 25 09:37:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:42 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:42 compute-1 sudo[99666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:37:42 compute-1 sudo[99666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:42 compute-1 sudo[99666]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:42 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c7460 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:42 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 25 09:37:43 compute-1 python3.9[99764]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:37:43 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 25 09:37:43 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598611832s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 active pruub 325.833038330s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:43 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598580360s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.833038330s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:43.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:43 compute-1 sshd-session[99241]: Connection closed by 192.168.122.30 port 47900
Nov 25 09:37:43 compute-1 sshd-session[99238]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:43 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Nov 25 09:37:43 compute-1 systemd[1]: session-38.scope: Consumed 1.548s CPU time.
Nov 25 09:37:43 compute-1 systemd-logind[746]: Session 38 logged out. Waiting for processes to exit.
Nov 25 09:37:43 compute-1 systemd-logind[746]: Removed session 38.
Nov 25 09:37:43 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:43 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:43 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:37:43 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 25 09:37:43 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:43 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:43 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311533928s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 active pruub 327.095214844s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:43 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311475754s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 327.095214844s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:43 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:43.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:43 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 25 09:37:43 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 25 09:37:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:44 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:44 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 25 09:37:44 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:44 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:44 compute-1 ceph-mon[79643]: 9.12 scrub starts
Nov 25 09:37:44 compute-1 ceph-mon[79643]: 9.12 scrub ok
Nov 25 09:37:44 compute-1 ceph-mon[79643]: pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:37:44 compute-1 ceph-mon[79643]: osdmap e132: 3 total, 3 up, 3 in
Nov 25 09:37:44 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:44 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:44 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 25 09:37:44 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 25 09:37:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:45 compute-1 ceph-mon[79643]: 9.15 scrub starts
Nov 25 09:37:45 compute-1 ceph-mon[79643]: 9.15 scrub ok
Nov 25 09:37:45 compute-1 ceph-mon[79643]: osdmap e133: 3 total, 3 up, 3 in
Nov 25 09:37:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:45 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 25 09:37:45 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988237381s) [1] async=[1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 42'1151 active pruub 331.791442871s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:45 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988134384s) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 331.791442871s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:45 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:45 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:45 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 25 09:37:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:45.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:45 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 25 09:37:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:46 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dec004760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:46 compute-1 ceph-mon[79643]: 9.d scrub starts
Nov 25 09:37:46 compute-1 ceph-mon[79643]: 9.d scrub ok
Nov 25 09:37:46 compute-1 ceph-mon[79643]: pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:46 compute-1 ceph-mon[79643]: osdmap e134: 3 total, 3 up, 3 in
Nov 25 09:37:46 compute-1 ceph-mon[79643]: 9.f scrub starts
Nov 25 09:37:46 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 25 09:37:46 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994965553s) [1] async=[1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 42'1151 active pruub 332.806915283s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:46 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994921684s) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 332.806915283s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:46 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:46 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Nov 25 09:37:46 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Nov 25 09:37:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 25 09:37:47 compute-1 ceph-mon[79643]: 9.f scrub ok
Nov 25 09:37:47 compute-1 ceph-mon[79643]: osdmap e135: 3 total, 3 up, 3 in
Nov 25 09:37:47 compute-1 ceph-mon[79643]: 9.a deep-scrub starts
Nov 25 09:37:47 compute-1 ceph-mon[79643]: 9.1e scrub starts
Nov 25 09:37:47 compute-1 ceph-mon[79643]: 9.1e scrub ok
Nov 25 09:37:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:47 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093747 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:37:47 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Nov 25 09:37:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:47 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Nov 25 09:37:48 compute-1 sshd-session[99796]: Accepted publickey for zuul from 192.168.122.30 port 47912 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:37:48 compute-1 systemd-logind[746]: New session 39 of user zuul.
Nov 25 09:37:48 compute-1 systemd[1]: Started Session 39 of User zuul.
Nov 25 09:37:48 compute-1 sshd-session[99796]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:37:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:48 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:48 compute-1 ceph-mon[79643]: 9.a deep-scrub ok
Nov 25 09:37:48 compute-1 ceph-mon[79643]: pgmap v188: 337 pgs: 1 active+remapped, 1 peering, 1 active+clean+scrubbing, 334 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:48 compute-1 ceph-mon[79643]: osdmap e136: 3 total, 3 up, 3 in
Nov 25 09:37:48 compute-1 ceph-mon[79643]: 9.1f scrub starts
Nov 25 09:37:48 compute-1 ceph-mon[79643]: 9.1f scrub ok
Nov 25 09:37:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:48 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dec005260 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:48 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 25 09:37:48 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 25 09:37:49 compute-1 python3.9[99949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:49 compute-1 ceph-mon[79643]: 9.e deep-scrub starts
Nov 25 09:37:49 compute-1 ceph-mon[79643]: 9.e deep-scrub ok
Nov 25 09:37:49 compute-1 ceph-mon[79643]: 9.6 scrub starts
Nov 25 09:37:49 compute-1 ceph-mon[79643]: 9.6 scrub ok
Nov 25 09:37:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:49 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c7460 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:49.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:50 compute-1 python3.9[100104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:50 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:50 compute-1 sudo[100258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnwxuaotzfkbviwvbnzpuowwsgstwoax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063470.4416518-81-48235887543725/AnsiballZ_setup.py'
Nov 25 09:37:50 compute-1 sudo[100258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:50 compute-1 ceph-mon[79643]: pgmap v190: 337 pgs: 1 active+remapped, 1 peering, 1 active+clean+scrubbing, 334 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:50 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:50 compute-1 python3.9[100260]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:37:51 compute-1 sudo[100258]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:51 compute-1 sudo[100343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahinykrcwxmxxomtokolfcioncftwlkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063470.4416518-81-48235887543725/AnsiballZ_dnf.py'
Nov 25 09:37:51 compute-1 sudo[100343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:51 compute-1 python3.9[100345]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:51 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:52 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ddc0c7460 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:52 compute-1 sudo[100343]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:52 compute-1 ceph-mon[79643]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:52 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9de0009c80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:52 compute-1 sudo[100496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihvvtmhuwitcrikispnsglffgfbvbqsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063472.740319-117-236964659933606/AnsiballZ_setup.py'
Nov 25 09:37:52 compute-1 sudo[100496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:53 compute-1 python3.9[100498]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:37:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:53 compute-1 sudo[100496]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:53 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dec005260 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:53 compute-1 sudo[100692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthkvxidekdbeizabwnhyuktuynesshy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063473.6766505-150-203262163371190/AnsiballZ_file.py'
Nov 25 09:37:53 compute-1 sudo[100692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:54 compute-1 python3.9[100694]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:37:54 compute-1 sudo[100692]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[88165]: 25/11/2025 09:37:54 : epoch 69257854 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dbc001320 fd 49 proxy ignored for local
Nov 25 09:37:54 compute-1 kernel: ganesha.nfsd[96400]: segfault at 50 ip 00007f9e65b4c32e sp 00007f9e2affc210 error 4 in libntirpc.so.5.8[7f9e65b31000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 09:37:54 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:37:54 compute-1 systemd[1]: Started Process Core Dump (PID 100772/UID 0).
Nov 25 09:37:54 compute-1 sudo[100846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjjbulyajppsbyakvycbecjshbqcxmdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063474.3283165-174-236716260891969/AnsiballZ_command.py'
Nov 25 09:37:54 compute-1 sudo[100846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:54 compute-1 ceph-mon[79643]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:54 compute-1 python3.9[100848]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:37:54 compute-1 sudo[100846]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:55.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:55 compute-1 sudo[101009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdigejszxinzigesvurgszzlvmvwsfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063474.9982126-198-101844723601300/AnsiballZ_stat.py'
Nov 25 09:37:55 compute-1 sudo[101009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:55 compute-1 python3.9[101011]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:37:55 compute-1 sudo[101009]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:55 compute-1 systemd-coredump[100785]: Process 88174 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 66:
                                                    #0  0x00007f9e65b4c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:37:55 compute-1 systemd[1]: systemd-coredump@1-100772-0.service: Deactivated successfully.
Nov 25 09:37:55 compute-1 systemd[1]: systemd-coredump@1-100772-0.service: Consumed 1.028s CPU time.
Nov 25 09:37:55 compute-1 podman[101042]: 2025-11-25 09:37:55.635404646 +0000 UTC m=+0.018058499 container died 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:37:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-00f2cd295f78725ebf252dc2f8b975e5882fc23e94b08a4e57f1b9ed19297003-merged.mount: Deactivated successfully.
Nov 25 09:37:55 compute-1 podman[101042]: 2025-11-25 09:37:55.655680764 +0000 UTC m=+0.038334617 container remove 97b4060ec7cbc126fd630094c624ea203d3cf63dbbedc2dd06eeb5cecf3a3665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:37:55 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:37:55 compute-1 sudo[101110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnslhyituecbumbhafhpurtoidyrvegc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063474.9982126-198-101844723601300/AnsiballZ_file.py'
Nov 25 09:37:55 compute-1 sudo[101110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:55 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:37:55 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.142s CPU time.
Nov 25 09:37:55 compute-1 python3.9[101114]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:37:55 compute-1 sudo[101110]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:55.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:56 compute-1 sudo[101275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbeprtaizqtgcrsgidixqqqlanfsxqni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063476.014772-234-260388879322247/AnsiballZ_stat.py'
Nov 25 09:37:56 compute-1 sudo[101275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:56 compute-1 python3.9[101277]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:37:56 compute-1 sudo[101275]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:56 compute-1 sudo[101353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghiylpnswwarokithflmwjqrrpipgcaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063476.014772-234-260388879322247/AnsiballZ_file.py'
Nov 25 09:37:56 compute-1 sudo[101353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:56 compute-1 python3.9[101355]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:56 compute-1 sudo[101353]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:56 compute-1 ceph-mon[79643]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:57 compute-1 sudo[101506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkazmuesgmkhxjkfuexpzoljdneffygr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063476.9514062-273-215179086571810/AnsiballZ_ini_file.py'
Nov 25 09:37:57 compute-1 sudo[101506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:57.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:57 compute-1 python3.9[101508]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:57 compute-1 sudo[101506]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:57 compute-1 sudo[101658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iklekfuxhxbwjztmwsusxigiatptraab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063477.5160224-273-233287710870057/AnsiballZ_ini_file.py'
Nov 25 09:37:57 compute-1 sudo[101658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:57 compute-1 ceph-mon[79643]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:57 compute-1 python3.9[101660]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:57 compute-1 sudo[101658]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:57.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:58 compute-1 sudo[101810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elahzqotrvgefhctnuvpfuazlwbkujbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063477.9589767-273-125842972045418/AnsiballZ_ini_file.py'
Nov 25 09:37:58 compute-1 sudo[101810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:58 compute-1 python3.9[101812]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:58 compute-1 sudo[101810]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:58 compute-1 sudo[101962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqinxsfxntioriffbkokomfbfstqihvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063478.3988903-273-216780405386032/AnsiballZ_ini_file.py'
Nov 25 09:37:58 compute-1 sudo[101962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:58 compute-1 python3.9[101964]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:58 compute-1 sudo[101962]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:59 compute-1 sudo[102064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:59 compute-1 sudo[102064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:59 compute-1 sudo[102064]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:59 compute-1 sudo[102140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kljlvioeritwzenuhxocqkmvfnlwkwja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063479.041514-366-28693547694344/AnsiballZ_dnf.py'
Nov 25 09:37:59 compute-1 sudo[102140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:59.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:59 compute-1 python3.9[102142]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:37:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:59.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:00 compute-1 ceph-mon[79643]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:38:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:00 compute-1 sudo[102140]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093800 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:38:01 compute-1 sudo[102293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdsuamougyffzcndiunanqzhpqzljvyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063480.8428235-399-144258836171807/AnsiballZ_setup.py'
Nov 25 09:38:01 compute-1 sudo[102293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:01 compute-1 python3.9[102295]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:38:01 compute-1 sudo[102293]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:01 compute-1 sudo[102448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uifxhdzjeslagqrzkgquotacdhonzbgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063481.4685419-423-171963082257372/AnsiballZ_stat.py'
Nov 25 09:38:01 compute-1 sudo[102448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:01 compute-1 python3.9[102450]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:38:01 compute-1 sudo[102448]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:01.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:02 compute-1 sudo[102600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbepuinedmcqxbsxndzkbomitwiyxbpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063482.047869-450-9885655733750/AnsiballZ_stat.py'
Nov 25 09:38:02 compute-1 sudo[102600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:02 compute-1 ceph-mon[79643]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:38:02 compute-1 python3.9[102602]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:38:02 compute-1 sudo[102600]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:02 compute-1 sudo[102752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuvdntemzysgihnbsryzoonrxeldfeny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063482.6908002-480-144385450099140/AnsiballZ_command.py'
Nov 25 09:38:02 compute-1 sudo[102752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:03 compute-1 python3.9[102754]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:38:03 compute-1 sudo[102752]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:03.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:03 compute-1 sudo[102906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fctnifetuzsucniimdngxjtqqofjkucs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063483.3260815-510-256458345494814/AnsiballZ_service_facts.py'
Nov 25 09:38:03 compute-1 sudo[102906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:03 compute-1 python3.9[102908]: ansible-service_facts Invoked
Nov 25 09:38:03 compute-1 network[102925]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:38:03 compute-1 network[102926]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:38:03 compute-1 network[102927]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:38:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:03.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:04 compute-1 ceph-mon[79643]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:05.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:05 compute-1 sudo[102906]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:05.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:05 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 2.
Nov 25 09:38:05 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:38:05 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.142s CPU time.
Nov 25 09:38:05 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:38:06 compute-1 podman[103102]: 2025-11-25 09:38:06.123138689 +0000 UTC m=+0.026614386 container create f79e1654075b660104ebd7026ded15337882cfa104df9d40982b65afb70ac2b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:38:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c4c9245584020db564b42cd11ad45e4cd1b7946e29a1201e68cfac609d1dd9/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c4c9245584020db564b42cd11ad45e4cd1b7946e29a1201e68cfac609d1dd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c4c9245584020db564b42cd11ad45e4cd1b7946e29a1201e68cfac609d1dd9/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c4c9245584020db564b42cd11ad45e4cd1b7946e29a1201e68cfac609d1dd9/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:06 compute-1 podman[103102]: 2025-11-25 09:38:06.16034052 +0000 UTC m=+0.063816226 container init f79e1654075b660104ebd7026ded15337882cfa104df9d40982b65afb70ac2b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:38:06 compute-1 podman[103102]: 2025-11-25 09:38:06.16699928 +0000 UTC m=+0.070474976 container start f79e1654075b660104ebd7026ded15337882cfa104df9d40982b65afb70ac2b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:38:06 compute-1 bash[103102]: f79e1654075b660104ebd7026ded15337882cfa104df9d40982b65afb70ac2b9
Nov 25 09:38:06 compute-1 podman[103102]: 2025-11-25 09:38:06.111779479 +0000 UTC m=+0.015255195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:38:06 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:38:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:38:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:38:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:38:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:38:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:38:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:38:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:38:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:38:06 compute-1 ceph-mon[79643]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:07.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:07 compute-1 sudo[103304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpulhevvmuirwawzhyqpwpdizsyhmpsp ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764063487.3724265-555-168400049265355/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764063487.3724265-555-168400049265355/args'
Nov 25 09:38:07 compute-1 sudo[103304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:07 compute-1 sudo[103304]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:07.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:08 compute-1 sudo[103471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiamlxjxltokgvxxcywvjommuxpbbwzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063487.9298515-588-87966031806913/AnsiballZ_dnf.py'
Nov 25 09:38:08 compute-1 sudo[103471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:08 compute-1 ceph-mon[79643]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:38:08 compute-1 python3.9[103473]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:38:09 compute-1 sudo[103471]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:09.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093809 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:38:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:09.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:10 compute-1 ceph-mon[79643]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:38:10 compute-1 sudo[103625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrdromzstfvhwwwolhpjrgkuyenflgro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063489.814511-627-278283763974518/AnsiballZ_package_facts.py'
Nov 25 09:38:10 compute-1 sudo[103625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:10 compute-1 python3.9[103627]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 09:38:10 compute-1 sudo[103625]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:11.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:11 compute-1 sudo[103778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtoyfrzceuevyanjzqgjioxjwzwdlncw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063491.3531756-657-27055294206322/AnsiballZ_stat.py'
Nov 25 09:38:11 compute-1 sudo[103778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:11 compute-1 python3.9[103780]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:11 compute-1 sudo[103778]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:11 compute-1 sudo[103856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkdrwxgkbrrtwftueywnetzaxjeaedca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063491.3531756-657-27055294206322/AnsiballZ_file.py'
Nov 25 09:38:11 compute-1 sudo[103856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:11.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:12 compute-1 python3.9[103858]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:12 compute-1 sudo[103856]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:12 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:38:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:12 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:38:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:12 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 09:38:12 compute-1 ceph-mon[79643]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:12 compute-1 sudo[104008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aasplljeiwcdjpyyovqmkxrodadtnyhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063492.334223-693-99546478886909/AnsiballZ_stat.py'
Nov 25 09:38:12 compute-1 sudo[104008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:12 compute-1 python3.9[104010]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:12 compute-1 sudo[104008]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:12 compute-1 sudo[104086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kndkxgbqnexkxjfsoronrceqhmppnsnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063492.334223-693-99546478886909/AnsiballZ_file.py'
Nov 25 09:38:12 compute-1 sudo[104086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:13 compute-1 python3.9[104088]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:13 compute-1 sudo[104086]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:13.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:13.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:14 compute-1 ceph-mon[79643]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:14 compute-1 sudo[104239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kprccizrglgljliysguspitsuhbggyyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063493.9743323-749-161878337324300/AnsiballZ_lineinfile.py'
Nov 25 09:38:14 compute-1 sudo[104239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:14 compute-1 python3.9[104241]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:14 compute-1 sudo[104239]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:14 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:38:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:14 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:38:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:14 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:38:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:15 compute-1 sudo[104392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsuykwlcmskuxtioxmdgtidmmrfppopn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063495.4540427-793-203393647096421/AnsiballZ_setup.py'
Nov 25 09:38:15 compute-1 sudo[104392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:15 compute-1 python3.9[104394]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:38:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:15.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:16 compute-1 sudo[104392]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:16 compute-1 ceph-mon[79643]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:16 compute-1 sudo[104476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tileveckkezuffuzeeyklsquakhmlsru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063495.4540427-793-203393647096421/AnsiballZ_systemd.py'
Nov 25 09:38:16 compute-1 sudo[104476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:16 compute-1 python3.9[104478]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:38:16 compute-1 sudo[104476]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:17 compute-1 sshd-session[99799]: Connection closed by 192.168.122.30 port 47912
Nov 25 09:38:17 compute-1 sshd-session[99796]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:38:17 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Nov 25 09:38:17 compute-1 systemd[1]: session-39.scope: Consumed 17.137s CPU time.
Nov 25 09:38:17 compute-1 systemd-logind[746]: Session 39 logged out. Waiting for processes to exit.
Nov 25 09:38:17 compute-1 systemd-logind[746]: Removed session 39.
Nov 25 09:38:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:17.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:18 compute-1 ceph-mon[79643]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:19 compute-1 sudo[104507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:19 compute-1 sudo[104507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:19 compute-1 sudo[104507]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:19.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:19.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:20 compute-1 ceph-mon[79643]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:38:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:38:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:21 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b20000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:21.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:22 compute-1 ceph-mon[79643]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:22 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b18001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:22 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b14001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:23 compute-1 sshd-session[104548]: Accepted publickey for zuul from 192.168.122.30 port 33136 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:38:23 compute-1 systemd-logind[746]: New session 40 of user zuul.
Nov 25 09:38:23 compute-1 systemd[1]: Started Session 40 of User zuul.
Nov 25 09:38:23 compute-1 sshd-session[104548]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:38:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:23.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:23 compute-1 sudo[104702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggtzqpsahdhelmowotzmpzcyvrknhnmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063503.1917272-27-224336028911994/AnsiballZ_file.py'
Nov 25 09:38:23 compute-1 sudo[104702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:23 compute-1 python3.9[104704]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:23 compute-1 sudo[104702]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:23 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c001900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:23.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:23 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:38:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:23 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:38:24 compute-1 sudo[104854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpqslepbdwkydolasdlphweznqhkwite ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063503.9215662-64-77080065515040/AnsiballZ_stat.py'
Nov 25 09:38:24 compute-1 sudo[104854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:24 compute-1 ceph-mon[79643]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:38:24 compute-1 python3.9[104856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:24 compute-1 sudo[104854]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093824 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:38:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:24 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c001900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:24 compute-1 sudo[104932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aozmzuuqrdtbutijlqyssupeeaugbxbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063503.9215662-64-77080065515040/AnsiballZ_file.py'
Nov 25 09:38:24 compute-1 sudo[104932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:24 compute-1 python3.9[104934]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:24 compute-1 sudo[104932]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:24 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:25 compute-1 sshd-session[104551]: Connection closed by 192.168.122.30 port 33136
Nov 25 09:38:25 compute-1 sshd-session[104548]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:38:25 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Nov 25 09:38:25 compute-1 systemd[1]: session-40.scope: Consumed 1.241s CPU time.
Nov 25 09:38:25 compute-1 systemd-logind[746]: Session 40 logged out. Waiting for processes to exit.
Nov 25 09:38:25 compute-1 systemd-logind[746]: Removed session 40.
Nov 25 09:38:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:25 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b14002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:25.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:26 compute-1 ceph-mon[79643]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:38:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:26 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c002c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:26 compute-1 sshd-session[71243]: Received disconnect from 192.168.26.191 port 38726:11: disconnected by user
Nov 25 09:38:26 compute-1 sshd-session[71243]: Disconnected from user zuul 192.168.26.191 port 38726
Nov 25 09:38:26 compute-1 sshd-session[71240]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:38:26 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 09:38:26 compute-1 systemd[1]: session-18.scope: Consumed 6.286s CPU time.
Nov 25 09:38:26 compute-1 systemd-logind[746]: Session 18 logged out. Waiting for processes to exit.
Nov 25 09:38:26 compute-1 systemd-logind[746]: Removed session 18.
Nov 25 09:38:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:26 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c002c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:27 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:38:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:27 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:27.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:28 compute-1 ceph-mon[79643]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Nov 25 09:38:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:28 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b14002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:28 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c002c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:29.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:29 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c002c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093829 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:38:29 compute-1 sshd-session[104962]: Accepted publickey for zuul from 192.168.122.30 port 55208 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:38:29 compute-1 systemd-logind[746]: New session 41 of user zuul.
Nov 25 09:38:29 compute-1 systemd[1]: Started Session 41 of User zuul.
Nov 25 09:38:29 compute-1 sshd-session[104962]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:38:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:29.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:30 compute-1 ceph-mon[79643]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:30 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:30 compute-1 python3.9[105115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:38:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:30 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c004480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:31 compute-1 sudo[105270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phlxwaacfntpqzfgckwgkxeqqdsgispp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063511.0397205-60-116627603080191/AnsiballZ_file.py'
Nov 25 09:38:31 compute-1 sudo[105270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:31.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:31 compute-1 python3.9[105272]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:31 compute-1 sudo[105270]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:31 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c004480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:31.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:32 compute-1 sudo[105445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekggnpkkxyhjkivyketbgrgfpsyzgat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063511.6735141-84-43494056858752/AnsiballZ_stat.py'
Nov 25 09:38:32 compute-1 sudo[105445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:32 compute-1 python3.9[105447]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:32 compute-1 sudo[105445]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:32 compute-1 ceph-mon[79643]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:32 compute-1 sudo[105523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzdwokxwqzbwnqanvqrviawgngqntywk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063511.6735141-84-43494056858752/AnsiballZ_file.py'
Nov 25 09:38:32 compute-1 sudo[105523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:32 compute-1 python3.9[105525]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.x53dirge recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:32 compute-1 sudo[105523]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:32 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c004480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:32 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:33 compute-1 sudo[105676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqrgigseakveaipnohokuwcsgmdvjzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063513.0059948-144-242556866065203/AnsiballZ_stat.py'
Nov 25 09:38:33 compute-1 sudo[105676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:33 compute-1 python3.9[105678]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:33 compute-1 sudo[105676]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:33 compute-1 sudo[105754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqxnbamimmoedptvtzlhnrtxgthpmze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063513.0059948-144-242556866065203/AnsiballZ_file.py'
Nov 25 09:38:33 compute-1 sudo[105754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:33 compute-1 python3.9[105756]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ucdh1fpp recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:33 compute-1 sudo[105754]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:33 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:33.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:34 compute-1 sudo[105906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlhjfircslnuadvahgbzbnbcghnjkvas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063513.892717-183-275529859269630/AnsiballZ_file.py'
Nov 25 09:38:34 compute-1 sudo[105906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:34 compute-1 python3.9[105908]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:38:34 compute-1 sudo[105906]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:34 compute-1 ceph-mon[79643]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:34 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140037c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:34 compute-1 sudo[106058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkjubkyanhbkhultvnirycaentevmefy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063514.3791754-207-66204969279623/AnsiballZ_stat.py'
Nov 25 09:38:34 compute-1 sudo[106058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:34 compute-1 python3.9[106060]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:34 compute-1 sudo[106058]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:34 compute-1 sudo[106136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdfghtuzedwhvijpgxlnfksubmvtuylm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063514.3791754-207-66204969279623/AnsiballZ_file.py'
Nov 25 09:38:34 compute-1 sudo[106136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:34 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:35 compute-1 python3.9[106138]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:38:35 compute-1 sudo[106136]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:35 compute-1 sudo[106289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhasgpzdabgtomqtveoryynjaoepnmzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063515.133622-207-239046707268837/AnsiballZ_stat.py'
Nov 25 09:38:35 compute-1 sudo[106289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:35 compute-1 python3.9[106291]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:35 compute-1 sudo[106289]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:35 compute-1 sudo[106367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yowttcipvoidwlpqqmfrapersrgloymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063515.133622-207-239046707268837/AnsiballZ_file.py'
Nov 25 09:38:35 compute-1 sudo[106367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:35 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:35 compute-1 python3.9[106369]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:38:35 compute-1 sudo[106367]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:36 compute-1 sudo[106519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdkzuwkpzlzcbznzyptejnrmfshqhcdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063515.9393659-276-90045720742945/AnsiballZ_file.py'
Nov 25 09:38:36 compute-1 sudo[106519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:36 compute-1 python3.9[106521]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:36 compute-1 sudo[106519]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:36 compute-1 ceph-mon[79643]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:36 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:36 compute-1 sudo[106671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evdjivnnncgnrsknolebnfyokxxelzob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063516.405659-300-186743620171607/AnsiballZ_stat.py'
Nov 25 09:38:36 compute-1 sudo[106671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:36 compute-1 python3.9[106673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:36 compute-1 sudo[106671]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:36 compute-1 sudo[106749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gagyjfaiitpumcifckwapayxwoirioze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063516.405659-300-186743620171607/AnsiballZ_file.py'
Nov 25 09:38:36 compute-1 sudo[106749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:36 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140040e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:37 compute-1 python3.9[106751]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:37 compute-1 sudo[106749]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:37 compute-1 sudo[106902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxwzufvjpibrzqtfzicihkkegxejccnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063517.2061317-336-269595502557286/AnsiballZ_stat.py'
Nov 25 09:38:37 compute-1 sudo[106902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:37.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:37 compute-1 python3.9[106904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:37 compute-1 sudo[106902]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:37 compute-1 sudo[106980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlsjqlqbsgefbhxelhyfpbjvdusiqodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063517.2061317-336-269595502557286/AnsiballZ_file.py'
Nov 25 09:38:37 compute-1 sudo[106980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:37 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:37 compute-1 python3.9[106982]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:37 compute-1 sudo[106980]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:38 compute-1 ceph-mon[79643]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:38 compute-1 sudo[107132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzcfnlgzqxuphpnmzfmssguzjinihnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063518.0108893-372-207802856491245/AnsiballZ_systemd.py'
Nov 25 09:38:38 compute-1 sudo[107132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:38 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:38 compute-1 python3.9[107134]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:38:38 compute-1 systemd[1]: Reloading.
Nov 25 09:38:38 compute-1 systemd-rc-local-generator[107159]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:38:38 compute-1 systemd-sysv-generator[107163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:38:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:38 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:39 compute-1 sudo[107132]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:39 compute-1 sudo[107269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:39 compute-1 sudo[107269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:39 compute-1 sudo[107269]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:39 compute-1 sudo[107347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztliplynkakzzjbzfcbrkmvbqzcaggw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063519.1510088-396-215399677266186/AnsiballZ_stat.py'
Nov 25 09:38:39 compute-1 sudo[107347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:39 compute-1 python3.9[107349]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:39 compute-1 sudo[107347]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:39 compute-1 sudo[107425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qthoqszeafjguermoydriudigtqaznjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063519.1510088-396-215399677266186/AnsiballZ_file.py'
Nov 25 09:38:39 compute-1 sudo[107425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:39 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140040e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:39 compute-1 python3.9[107427]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:39 compute-1 sudo[107425]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:40 compute-1 sudo[107577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svabvvikgcprvysdvriwjnereemadwgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063519.985524-432-219559967707648/AnsiballZ_stat.py'
Nov 25 09:38:40 compute-1 sudo[107577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:40 compute-1 python3.9[107579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:40 compute-1 sudo[107577]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:40 compute-1 ceph-mon[79643]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:38:40 compute-1 sudo[107655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymtdbokkwjtrcvtnpciumlslqefhnqgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063519.985524-432-219559967707648/AnsiballZ_file.py'
Nov 25 09:38:40 compute-1 sudo[107655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:40 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:40 compute-1 python3.9[107657]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:40 compute-1 sudo[107655]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:40 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:40 compute-1 sudo[107807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhuqknkzdkxdzdvsqqzzglrwdoepfwkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063520.7820034-468-96532771324935/AnsiballZ_systemd.py'
Nov 25 09:38:40 compute-1 sudo[107807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:41 compute-1 python3.9[107809]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:38:41 compute-1 systemd[1]: Reloading.
Nov 25 09:38:41 compute-1 systemd-sysv-generator[107834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:38:41 compute-1 systemd-rc-local-generator[107831]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:38:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:41 compute-1 systemd[1]: Starting Create netns directory...
Nov 25 09:38:41 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:38:41 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:38:41 compute-1 systemd[1]: Finished Create netns directory.
Nov 25 09:38:41 compute-1 sudo[107807]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:41 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:41.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:42 compute-1 python3.9[108000]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:38:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:42 compute-1 network[108017]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:38:42 compute-1 network[108018]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:38:42 compute-1 network[108019]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:38:42 compute-1 ceph-mon[79643]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:38:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:42 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140040e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:42 compute-1 sudo[108046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:42 compute-1 sudo[108046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:42 compute-1 sudo[108046]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:42 compute-1 sudo[108076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:38:42 compute-1 sudo[108076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:42 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:43 compute-1 sudo[108076]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:43.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:43 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:43.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:44 compute-1 ceph-mon[79643]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:44 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:44 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140051e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:38:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:45.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:38:45 compute-1 sudo[108360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdnbmodnkgoqmiznnjhrgxeilxazzpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063525.5533938-546-233721125769305/AnsiballZ_stat.py'
Nov 25 09:38:45 compute-1 sudo[108360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:45 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:45 compute-1 python3.9[108362]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:45 compute-1 sudo[108360]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:45.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:46 compute-1 sudo[108438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aibamqgxtvsknddtbgnlzirwtwfuzoml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063525.5533938-546-233721125769305/AnsiballZ_file.py'
Nov 25 09:38:46 compute-1 sudo[108438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:46 compute-1 python3.9[108440]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:46 compute-1 sudo[108438]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:46 compute-1 ceph-mon[79643]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:38:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:38:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:38:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:46 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:46 compute-1 sudo[108590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqmugbxhdbkzxemxnisfolipxpaopmuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063526.4329877-585-144254033494450/AnsiballZ_file.py'
Nov 25 09:38:46 compute-1 sudo[108590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:46 compute-1 python3.9[108592]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:46 compute-1 sudo[108590]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:46 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:47 compute-1 sudo[108742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlsqbeovfvhscavyxyyfxrzccwjjxlmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063526.90272-609-120202823989242/AnsiballZ_stat.py'
Nov 25 09:38:47 compute-1 sudo[108742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:47 compute-1 python3.9[108744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:47 compute-1 sudo[108742]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:47 compute-1 sudo[108821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egveizcbuoiqleomhulprltkdqtibspc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063526.90272-609-120202823989242/AnsiballZ_file.py'
Nov 25 09:38:47 compute-1 sudo[108821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:47 compute-1 python3.9[108823]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:47 compute-1 sudo[108821]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:47 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140051e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:48 compute-1 sudo[108973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhhgvujfxidzvdcxnylbjxqknlxipmkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063527.9217186-654-150648894772149/AnsiballZ_timezone.py'
Nov 25 09:38:48 compute-1 sudo[108973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:48 compute-1 python3.9[108975]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 09:38:48 compute-1 systemd[1]: Starting Time & Date Service...
Nov 25 09:38:48 compute-1 ceph-mon[79643]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:48 compute-1 systemd[1]: Started Time & Date Service.
Nov 25 09:38:48 compute-1 sudo[108973]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:48 compute-1 sudo[108980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:38:48 compute-1 sudo[108980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:48 compute-1 sudo[108980]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:48 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:48 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:48 compute-1 sudo[109154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doysjxdvsfborgebnxfajvpowqtrnjtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063528.754974-681-43657940535597/AnsiballZ_file.py'
Nov 25 09:38:48 compute-1 sudo[109154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:49 compute-1 python3.9[109156]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:49 compute-1 sudo[109154]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:49 compute-1 sudo[109307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyntxozncyrqpogjdjkerovcrgxdlvmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063529.2414455-705-182303067292429/AnsiballZ_stat.py'
Nov 25 09:38:49 compute-1 sudo[109307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:49 compute-1 python3.9[109309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:49 compute-1 sudo[109307]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:49 compute-1 sudo[109385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilbchmpaxhqluwnrvsaruyskpycfsoxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063529.2414455-705-182303067292429/AnsiballZ_file.py'
Nov 25 09:38:49 compute-1 sudo[109385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:49 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:49 compute-1 python3.9[109387]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:49 compute-1 sudo[109385]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:50.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:50 compute-1 sudo[109537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkptcadoznpvgvuaukudzgfvidzwbdds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063530.066082-741-111233066404039/AnsiballZ_stat.py'
Nov 25 09:38:50 compute-1 sudo[109537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:50 compute-1 python3.9[109539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:50 compute-1 sudo[109537]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:50 compute-1 ceph-mon[79643]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:50 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140051e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:50 compute-1 sudo[109615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcoffbrbxeskzbvmjwzqypmbiziabzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063530.066082-741-111233066404039/AnsiballZ_file.py'
Nov 25 09:38:50 compute-1 sudo[109615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:50 compute-1 python3.9[109617]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zdf17guz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:50 compute-1 sudo[109615]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:50 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:51 compute-1 sudo[109767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwrqimbfnfymvomipfwgbgdqdiwwrrbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063530.913258-777-147445601659862/AnsiballZ_stat.py'
Nov 25 09:38:51 compute-1 sudo[109767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:51 compute-1 python3.9[109769]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:51 compute-1 sudo[109767]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:51.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:51 compute-1 sudo[109846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrpjsdesvwcehijjakfvecwxbyvottaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063530.913258-777-147445601659862/AnsiballZ_file.py'
Nov 25 09:38:51 compute-1 sudo[109846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:51 compute-1 python3.9[109848]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:51 compute-1 sudo[109846]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:51 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:52.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:52 compute-1 sudo[109998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vecwwpfygeidjjhsshtjxiymymutmddg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063531.8926177-816-100736978407583/AnsiballZ_command.py'
Nov 25 09:38:52 compute-1 sudo[109998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:52 compute-1 python3.9[110000]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:38:52 compute-1 sudo[109998]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:52 compute-1 ceph-mon[79643]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:38:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:52 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:52 compute-1 sudo[110151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swjgemfuqhjwojjyyklddcmrnpcmreyd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063532.5698197-840-146832351216637/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 09:38:52 compute-1 sudo[110151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:52 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:53 compute-1 python3[110153]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 09:38:53 compute-1 sudo[110151]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:53 compute-1 sudo[110306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-porzrpbqdujxyavfzkeqiiwuuuzfeqeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063533.208272-864-248782356001090/AnsiballZ_stat.py'
Nov 25 09:38:53 compute-1 sudo[110306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:53.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:53 compute-1 python3.9[110308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:53 compute-1 sudo[110306]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:53 compute-1 sudo[110384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlqhvrwmpgoyzhgiyhazdnxksiykeseg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063533.208272-864-248782356001090/AnsiballZ_file.py'
Nov 25 09:38:53 compute-1 sudo[110384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:53 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:53 compute-1 python3.9[110386]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:53 compute-1 sudo[110384]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:54.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:54 compute-1 sudo[110536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ockqjaldndfkbagbvigjanqgthlqbzni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063534.1101558-900-155112348813570/AnsiballZ_stat.py'
Nov 25 09:38:54 compute-1 sudo[110536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:54 compute-1 ceph-mon[79643]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:54 compute-1 python3.9[110538]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:54 compute-1 sudo[110536]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:54 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b30003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:54 compute-1 sudo[110614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viisvpuqxxsubxkqxdgrockxalzjqcku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063534.1101558-900-155112348813570/AnsiballZ_file.py'
Nov 25 09:38:54 compute-1 sudo[110614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:54 compute-1 python3.9[110616]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:54 compute-1 sudo[110614]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:54 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140051e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:55 compute-1 sudo[110766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odihzpmfbptgwsiyeczmyacjpghjsmcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063534.978344-936-101572110045813/AnsiballZ_stat.py'
Nov 25 09:38:55 compute-1 sudo[110766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:55 compute-1 python3.9[110769]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:55 compute-1 sudo[110766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:55.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:55 compute-1 sudo[110845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxbhuaseqmotqbtwvbmroxocpjpjhwgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063534.978344-936-101572110045813/AnsiballZ_file.py'
Nov 25 09:38:55 compute-1 sudo[110845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:55 compute-1 python3.9[110847]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:55 compute-1 sudo[110845]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:55 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:56.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:56 compute-1 sudo[110997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hppsjrufijlqtzqnfitjelhcbnllgcfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063535.856798-972-64150312733099/AnsiballZ_stat.py'
Nov 25 09:38:56 compute-1 sudo[110997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:56 compute-1 python3.9[110999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:56 compute-1 sudo[110997]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:56 compute-1 sudo[111075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skgiphfafqtqolyppowdwlmvzgsgxayt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063535.856798-972-64150312733099/AnsiballZ_file.py'
Nov 25 09:38:56 compute-1 sudo[111075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:56 compute-1 ceph-mon[79643]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:56 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:56 compute-1 python3.9[111077]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:56 compute-1 sudo[111075]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:56 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b30004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:56 compute-1 sudo[111227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggyifiabmgyylvahboerswbofyhqlbrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063536.7127612-1008-23240928321689/AnsiballZ_stat.py'
Nov 25 09:38:56 compute-1 sudo[111227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:57 compute-1 python3.9[111229]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:57 compute-1 sudo[111227]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:57 compute-1 sudo[111306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmibnxcrtiomjkmwomxszgpxjqkpsouq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063536.7127612-1008-23240928321689/AnsiballZ_file.py'
Nov 25 09:38:57 compute-1 sudo[111306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:57.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:57 compute-1 python3.9[111308]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:57 compute-1 sudo[111306]: pam_unix(sudo:session): session closed for user root
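[editor's note] The entries from 09:38:53 to 09:38:57 show the same stat-then-file pattern applied to each rendered nftables fragment: edpm-jumps.nft, edpm-update-jumps.nft, edpm-flushes.nft, edpm-chains.nft and edpm-rules.nft are each checked and pinned to root:root 0600. A tasks-file sketch of that per-fragment step — the task name and the loop are assumptions (the logged invocations and their _original_basename values jump-chain.j2, flush-chain.j2, chains.j2 and ruleset.j2 suggest one task per template); owner, group, mode and state are copied from the logged arguments:

    # Sketch only: the real role likely runs one task per rendered template.
    - name: Enforce ownership and permissions on each rendered nftables fragment
      become: true
      ansible.builtin.file:
        path: "/etc/nftables/{{ item }}"
        owner: root
        group: root
        mode: '0600'
        state: file          # fail if the fragment was not rendered first
      loop:
        - edpm-jumps.nft
        - edpm-update-jumps.nft
        - edpm-flushes.nft
        - edpm-chains.nft
        - edpm-rules.nft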
Nov 25 09:38:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:57 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140051e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:57 compute-1 sudo[111458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-objhhpiyqwkbqlpeoqfrtmkkgxdcspys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063537.6831782-1047-172793045850612/AnsiballZ_command.py'
Nov 25 09:38:57 compute-1 sudo[111458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:58.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:58 compute-1 python3.9[111460]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:38:58 compute-1 sudo[111458]: pam_unix(sudo:session): session closed for user root
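[editor's note] Before anything touches the kernel, the five fragments are concatenated and piped to nft -c -f -, which parses and type-checks the combined ruleset from stdin without committing it. A minimal reconstruction of the task behind the invocation logged just above — the shell string is taken verbatim from the logged _raw_params; the task name and become are assumptions:

    - name: Syntax-check the combined EDPM ruleset without applying it
      become: true
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-chains.nft \
            /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft \
            /etc/nftables/edpm-jumps.nft | nft -c -f -

The concatenation order matters: chains must be declared before the flushes and rules that reference them, and the jump fragments come last because they hook the EDPM chains into the existing tables.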
Nov 25 09:38:58 compute-1 ceph-mon[79643]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:58 compute-1 sudo[111613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcyestmrwkodfkiulfqdlfqssmpyksjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063538.192616-1071-76446523038767/AnsiballZ_blockinfile.py'
Nov 25 09:38:58 compute-1 sudo[111613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:58 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c003350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:58 compute-1 python3.9[111615]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:58 compute-1 sudo[111613]: pam_unix(sudo:session): session closed for user root
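[editor's note] The persistent configuration is wired up next: blockinfile plants a managed block of include statements in /etc/sysconfig/nftables.conf, and the validate parameter runs nft -c -f against the candidate file so a broken config is never written. A sketch with the block text and validate string copied from the logged parameters (task name assumed; the logged marker matches blockinfile's default "# {mark} ANSIBLE MANAGED BLOCK"):

    - name: Include the EDPM fragments from the persistent nftables config
      become: true
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        validate: nft -c -f %s      # dry-run parse of the candidate file
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"

Note that the flush and update-jump fragments are not persisted here, which is consistent with their apparent role as apply-time helpers rather than boot-time state.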
Nov 25 09:38:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:58 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:59 compute-1 sudo[111765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimisorlbywlgwiloibsxchewqjgimuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063538.9276934-1098-18629135591334/AnsiballZ_file.py'
Nov 25 09:38:59 compute-1 sudo[111765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:59 compute-1 python3.9[111767]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:59 compute-1 sudo[111765]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:59 compute-1 sudo[111769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:59 compute-1 sudo[111769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:59 compute-1 sudo[111769]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:38:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:59.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:59 compute-1 sudo[111943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozkuevhfuwktntcgopfapjsvwbuhnkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063539.3809571-1098-11127034540789/AnsiballZ_file.py'
Nov 25 09:38:59 compute-1 sudo[111943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:59 compute-1 python3.9[111945]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:59 compute-1 sudo[111943]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:38:59 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:00.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:00 compute-1 sudo[112095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxocwpcfqgyakhuzlakgcymajnhmnpwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063539.8812585-1143-233053088534366/AnsiballZ_mount.py'
Nov 25 09:39:00 compute-1 sudo[112095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:00 compute-1 python3.9[112097]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 09:39:00 compute-1 sudo[112095]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:00 compute-1 ceph-mon[79643]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:00 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:00 compute-1 sudo[112247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiztnsmujekgyxyuboexljarpbodsbqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063540.4795856-1143-32513220206915/AnsiballZ_mount.py'
Nov 25 09:39:00 compute-1 sudo[112247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:00 compute-1 python3.9[112249]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 09:39:00 compute-1 sudo[112247]: pam_unix(sudo:session): session closed for user root
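[editor's note] Two hugetlbfs mounts are then created; the mount-point directories were prepared moments earlier (09:38:59) with owner zuul, group hugetlbfs, mode 0775. With state=mounted, ansible.posix.mount both mounts the filesystem immediately and records it in /etc/fstab so it survives a reboot. A sketch with all module parameters copied from the two logged invocations (task names assumed):

    - name: Mount 1 GiB hugepages
      become: true
      ansible.posix.mount:
        path: /dev/hugepages1G
        src: none
        fstype: hugetlbfs
        opts: pagesize=1G
        state: mounted

    - name: Mount 2 MiB hugepages
      become: true
      ansible.posix.mount:
        path: /dev/hugepages2M
        src: none
        fstype: hugetlbfs
        opts: pagesize=2M
        state: mounted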
Nov 25 09:39:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:00 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:01 compute-1 sshd-session[104965]: Connection closed by 192.168.122.30 port 55208
Nov 25 09:39:01 compute-1 sshd-session[104962]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:01 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Nov 25 09:39:01 compute-1 systemd[1]: session-41.scope: Consumed 20.634s CPU time.
Nov 25 09:39:01 compute-1 systemd-logind[746]: Session 41 logged out. Waiting for processes to exit.
Nov 25 09:39:01 compute-1 systemd-logind[746]: Removed session 41.
Nov 25 09:39:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:01 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c003350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:02.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:02 compute-1 ceph-mon[79643]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:39:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:02 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140051e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:02 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b30004c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:03.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:03 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:04.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:04 compute-1 ceph-mon[79643]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:04 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:04 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093905 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:39:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:05.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:05 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b30004c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:06.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:06 compute-1 sshd-session[112277]: Accepted publickey for zuul from 192.168.122.30 port 57428 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:39:06 compute-1 systemd-logind[746]: New session 42 of user zuul.
Nov 25 09:39:06 compute-1 systemd[1]: Started Session 42 of User zuul.
Nov 25 09:39:06 compute-1 sshd-session[112277]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:39:06 compute-1 ceph-mon[79643]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140051e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:06 compute-1 sudo[112430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryskmyxkawpksrmwczwjpulsumsxadnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063546.5048497-19-31388401753073/AnsiballZ_tempfile.py'
Nov 25 09:39:06 compute-1 sudo[112430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b140051e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:06 compute-1 python3.9[112432]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 09:39:06 compute-1 sudo[112430]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:07.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:07 compute-1 sudo[112583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msceeaqisonjuwfrmsmynoxnnnjsgoqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063547.1443555-55-205892994558756/AnsiballZ_stat.py'
Nov 25 09:39:07 compute-1 sudo[112583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:07 compute-1 python3.9[112585]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:39:07 compute-1 sudo[112583]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:07 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:08.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:08 compute-1 sudo[112739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oabsqioftjklnyacnkbdfzomkytwsnfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063547.7541978-79-71202094907687/AnsiballZ_slurp.py'
Nov 25 09:39:08 compute-1 sudo[112739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:08 compute-1 python3.9[112741]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 25 09:39:08 compute-1 sudo[112739]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:08 compute-1 ceph-mon[79643]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:08 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:08 compute-1 sudo[112891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbtpjodkhkrlxcchvyoyvnmwiwhagdsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063548.419993-103-90816499221243/AnsiballZ_stat.py'
Nov 25 09:39:08 compute-1 sudo[112891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:08 compute-1 python3.9[112893]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.kwvwszme follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:08 compute-1 sudo[112891]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:08 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b38002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:09 compute-1 sudo[113016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voggaketrdjllpemeufshkmeuplitwri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063548.419993-103-90816499221243/AnsiballZ_copy.py'
Nov 25 09:39:09 compute-1 sudo[113016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:09 compute-1 python3.9[113018]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.kwvwszme mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063548.419993-103-90816499221243/.source.kwvwszme _original_basename=.o9cza3j8 follow=False checksum=719236507bdcc56ceb2be3ce1ef5008b5cfc2235 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:09 compute-1 sudo[113016]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:09.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:09 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:09 compute-1 sudo[113169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssyxklkpxkzqcrbfspamhepmovhvisnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063549.4648552-148-37763387837482/AnsiballZ_setup.py'
Nov 25 09:39:09 compute-1 sudo[113169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:10 compute-1 python3.9[113171]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:39:10 compute-1 sudo[113169]: pam_unix(sudo:session): session closed for user root
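[editor's note] Session 42 runs a known-hosts distribution flow: a temp file is created (09:39:06), the current /etc/ssh/ssh_known_hosts is stat'ed and slurped, then setup gathers only the SSH host-key facts. Restricting gather_subset with '!all' and '!min' skips the expensive default fact collection. A sketch with the subset list copied from the logged invocation (task name assumed):

    - name: Collect only the SSH host key facts
      become: true
      ansible.builtin.setup:
        gather_subset:
          - '!all'
          - '!min'
          - ssh_host_key_rsa_public
          - ssh_host_key_ed25519_public
          - ssh_host_key_ecdsa_public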
Nov 25 09:39:10 compute-1 ceph-mon[79643]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:10 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:10 compute-1 sudo[113321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxutkrjuhgryepeiprajwcgpqrwlnfug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063550.3251874-173-5463891449500/AnsiballZ_blockinfile.py'
Nov 25 09:39:10 compute-1 sudo[113321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:10 compute-1 python3.9[113323]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/QqShzRf5Fxs30q3tSf7IhrByfRVQwrs4CVW/gcd2Sdcp7tmVXVNFpJc8XlgTmWxcSLbFtAv0HgJOJ3p6/+g394nChAIaM55uhK/RLFqBZ/byiFqEjvN2LkEWuUVdvbZM808GhONJnWQtg70nn99jeLP34zkSD7gsU7cykxF7K7VyeBfeSiuOcyTjXvVfXr9TZxCZMrsb4eWFZAZ4QERXITlLcZthwc0kd17QWJWLo8Ssv4Qu0DtCHtqHO07s7Nz/CpSs0TX5jVM+C+2rAMn+aAZ4J25X8di4ABF5tO27d+ePazRlU5PWjb8n6kdy1B/cjHgvajXOoUPb5RjyVx2IgULBXaWsIRO23wp8YqiE1OdTly2+Nr5KiTPvR5yqq9C6aBNzS7YyUQc6Rf2RBAaLQbA36NJLGvPUWC7iYVtWdGoTfcTmzqkD2s3hzZl+zU2xNS0IpwByJsOJVIijtGFh1Y45uujq0WUJNPf1ayrY2Z/TV+iO/1iah3JArjyNiq8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPD1sScOy6Aiq5PZkl3KepHqJnvlMIZW4R0DzMl4b3w
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO/iVb5vehoW1eqrk4jdR3j25kacpoWkaPIq4PHAndTN4lXAEwSRab7iUqXkAAaYvUnrCJ86WUoAYGkII0QB5wA=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSE1VMIuB9MiQ17/QHDRAbfwrBNbTb+wZH1rCqeQvAxcHqZYp6TugJnyWX+nah5oDk8vz2PCIUW2lm/tVgP4Y2JHeaN2uMNgVnz1WtD6lCQORMYi1R+KpBgiAQoZAjAyC5Ugx5LWbDvrwtpt0zi2DEgCr2Zao5DG5UAaIcs7/Rj2LRx3hgA4jJ9xJKHVi5bUZfjIlWxLzVXVYT+dvUNrZoiVMBcaUMZRpU4tJ/76mE2jbqsfHEPFwHZ6ljoIegFbzNYoKYMCPK+DeOs/73xD4r/nzeQOK3IQzMOEEVaUYvceA+EPX4M+MrKfkNrJwf35qTOFJpb368gJsebA9uXjzPfzX/uh1atxLv5SihEzC5fHdiZ3BZ3wLEy0C7lvXyRBZdQx+anEYQnDepM/ThOT4YR2BNSCdRS2OpzeSJDS+o5CS++zCqWM4yI3lufZm8O8JqPEblV518196TSyMlAOzPbjEjrUaYGdljY5S2OzKA4PBJW4hW4RyBtjcZWJBpNlM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBoG9NSSqw98oHfgpW8u+wJYHDhMiOjIhpCElLIROYdO
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFL1noqwoCl3YzxWiRl0GcsDxYERT1o8e2TvLqUkxWuv8xj0oHuq7+GhcKu7HpiCls71ko7MDcOX4zteG544k4=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBYH+LEkGk38QCoX+uCPb3zHk7+XCeEWV22HpalqUrYF70U5Myra5/E2/v2kioqGNh5TR9q+A7kNO0JU78Ai+6UBv5aJlbEptu33E5t38qiAv3rpyypYwQ8PdWBl7OCeDcqz0EyYAZEw7rLbCWimqRhYsSXuUND+rRboiuI8DEX229oAgnRmIjyPJTTdKGiM3FTdl9YiSbYNyBykzJ8AugCfme4+hmds+8LJloh2aJjRJCs3/GvxdaGJcjBWAqN3Aurg+gPekKe4fwmOir2+KpqBDQE9YMfiBvraaCMGrDXkAjPdsycsvGMsWckhOgEW5qpTIt+ca5kcrK43ChAH5R/PpHlHnEYqw2o26BLmqIejfmXKRSxmH/Fq9Ldj3DMLJr4NTFBfJAl8wqsUKs6/0jngwOCYz6NLs7GgGZLMYv6wbRVgUpCc4ikQ8f1EDmXTdtqxef+QdmLTgWY1qCqe5lL8BcDDCjOTLJ6bbLUAdubY1z4vb6SFVcamH4SkSCFxs=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHCQQOw3EbtZ2XAFA2gGrEnb7MaEAFwIJjyskket7pD
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFP8ctNKDLqIcODtgMol02WD/NgFM5ja/WeN20e07JH/Mz/Ge/v2/ybsY8LOtiyzixlX47XT8hWBR4IBwS2uvfM=
                                              create=True mode=0644 path=/tmp/ansible.kwvwszme state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:10 compute-1 sudo[113321]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:10 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c0043f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:11 compute-1 sudo[113474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngxjovainzoxcrusbjqojwkktzpzcycm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063550.9607463-197-105960639634653/AnsiballZ_command.py'
Nov 25 09:39:11 compute-1 sudo[113474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:11 compute-1 python3.9[113476]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.kwvwszme' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:39:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:11.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:11 compute-1 sudo[113474]: pam_unix(sudo:session): session closed for user root
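[editor's note] With the RSA, Ed25519 and ECDSA keys of compute-0/1/2 assembled into the temp file's managed block (09:39:10), the result is written over /etc/ssh/ssh_known_hosts with a plain shell redirect, and the temp file is deleted afterwards (09:39:12). A sketch of the install step — the shell string is copied from the logged _raw_params; /tmp/ansible.kwvwszme is the ephemeral name generated by the earlier tempfile task, not a fixed path:

    - name: Install the assembled system-wide known_hosts
      become: true
      ansible.builtin.shell: cat '/tmp/ansible.kwvwszme' > /etc/ssh/ssh_known_hosts

One plausible reason for the redirect instead of a copy: truncating the destination in place preserves its inode, ownership and SELinux context, though that design intent is an inference, not something the log states.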
Nov 25 09:39:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:11 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b38003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:11 compute-1 sudo[113628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acduccrdtrgngmkeaelsqsiclglelfsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063551.5927963-221-60438249249311/AnsiballZ_file.py'
Nov 25 09:39:11 compute-1 sudo[113628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:12 compute-1 python3.9[113630]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.kwvwszme state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:12 compute-1 sudo[113628]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:12 compute-1 sshd-session[112280]: Connection closed by 192.168.122.30 port 57428
Nov 25 09:39:12 compute-1 sshd-session[112277]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:12 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Nov 25 09:39:12 compute-1 systemd[1]: session-42.scope: Consumed 3.879s CPU time.
Nov 25 09:39:12 compute-1 systemd-logind[746]: Session 42 logged out. Waiting for processes to exit.
Nov 25 09:39:12 compute-1 systemd-logind[746]: Removed session 42.
Nov 25 09:39:12 compute-1 ceph-mon[79643]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:12 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:12 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:13.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:13 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c0043f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:14.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:14 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:39:14 compute-1 ceph-mon[79643]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:39:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:14 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b38003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:14 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:15.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:15 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:16 compute-1 ceph-mon[79643]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:39:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:16 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:16 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b38003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:17 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:39:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:17 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:39:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:17 compute-1 sshd-session[113658]: Accepted publickey for zuul from 192.168.122.30 port 37182 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:39:17 compute-1 systemd-logind[746]: New session 43 of user zuul.
Nov 25 09:39:17 compute-1 systemd[1]: Started Session 43 of User zuul.
Nov 25 09:39:17 compute-1 sshd-session[113658]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:39:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:17.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:17 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:18.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:18 compute-1 python3.9[113811]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:39:18 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 09:39:18 compute-1 ceph-mon[79643]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:39:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:18 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:18 compute-1 sudo[113967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdjnqeofhytkmecbkvvkqabfpevxcna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063558.4936574-57-120457528755746/AnsiballZ_systemd.py'
Nov 25 09:39:18 compute-1 sudo[113967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:18 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:19 compute-1 python3.9[113969]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 09:39:19 compute-1 sudo[113967]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:19 compute-1 sudo[114010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:19 compute-1 sudo[114010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:19 compute-1 sudo[114010]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:19.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:19 compute-1 sudo[114147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvttyfcokdlbxqxotysnyjyfckqjijw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063559.3506703-81-141018685740362/AnsiballZ_systemd.py'
Nov 25 09:39:19 compute-1 sudo[114147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:19 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b380045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:19 compute-1 python3.9[114149]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:39:19 compute-1 sudo[114147]: pam_unix(sudo:session): session closed for user root
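[editor's note] Session 43 then makes sure sshd is both enabled and running, as two separate idempotent tasks (enabled=True at 09:39:19, state=started immediately after). A sketch with the module parameters copied from the two logged invocations (task names assumed):

    - name: Enable sshd at boot
      become: true
      ansible.builtin.systemd:
        name: sshd
        enabled: true

    - name: Ensure sshd is running
      become: true
      ansible.builtin.systemd:
        name: sshd
        state: started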
Nov 25 09:39:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:20.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:39:20 compute-1 sudo[114300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocqirpgtielahcsveunxgbtaptocmxin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063560.0613046-108-147964640965671/AnsiballZ_command.py'
Nov 25 09:39:20 compute-1 sudo[114300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:20 compute-1 ceph-mon[79643]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:39:20 compute-1 python3.9[114302]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:39:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:20 compute-1 sudo[114300]: pam_unix(sudo:session): session closed for user root
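[editor's note] Only the chain definitions are applied unconditionally at this point: nft -f /etc/nftables/edpm-chains.nft creates the EDPM tables and base chains, a step that is safe to repeat if the fragment only declares chains; the flush, rules and jump fragments are presumably applied only when the ruleset actually changed (see the change-marker check below). A sketch with the command copied from the logged invocation (task name assumed):

    - name: Create the EDPM tables and base chains
      become: true
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft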
Nov 25 09:39:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:21 compute-1 sudo[114454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtcpqedfkljdixpbdtibsiiwlgkafxzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063560.7278173-132-56748189975198/AnsiballZ_stat.py'
Nov 25 09:39:21 compute-1 sudo[114454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:21 compute-1 python3.9[114456]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:39:21 compute-1 sudo[114454]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:21.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:21 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:21 compute-1 sudo[114606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqkjxjwuacamehqoevyjfmdgryvaalrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063561.576103-159-146958212370141/AnsiballZ_file.py'
Nov 25 09:39:21 compute-1 sudo[114606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:22 compute-1 python3.9[114608]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:22.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:22 compute-1 sudo[114606]: pam_unix(sudo:session): session closed for user root
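[editor's note] The role finishes with a flag-file handshake: the stat at 09:39:21 checks for /etc/nftables/edpm-rules.nft.changed, and the file task at 09:39:22 removes it. The likely intent (an inference; the log only shows the stat and the removal) is that an earlier template step drops the .changed marker when it rewrites the ruleset, and this cleanup clears it once the new rules have been validated and loaded. A sketch of the cleanup step with parameters copied from the logged invocation:

    - name: Clear the edpm-rules change marker once the ruleset is in place
      become: true
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent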
Nov 25 09:39:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:22 compute-1 sshd-session[113661]: Connection closed by 192.168.122.30 port 37182
Nov 25 09:39:22 compute-1 sshd-session[113658]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:22 compute-1 systemd-logind[746]: Session 43 logged out. Waiting for processes to exit.
Nov 25 09:39:22 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Nov 25 09:39:22 compute-1 systemd[1]: session-43.scope: Consumed 2.967s CPU time.
Nov 25 09:39:22 compute-1 systemd-logind[746]: Removed session 43.
Nov 25 09:39:22 compute-1 ceph-mon[79643]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:22 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b380045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:22 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:39:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:23.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:39:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:23 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:24.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:24 compute-1 ceph-mon[79643]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:24 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:24 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b380045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093925 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:39:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:25.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:25 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:26.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:26 compute-1 ceph-mon[79643]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:26 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:26 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b380045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:27 compute-1 sshd-session[114636]: Accepted publickey for zuul from 192.168.122.30 port 48616 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:39:27 compute-1 systemd-logind[746]: New session 44 of user zuul.
Nov 25 09:39:27 compute-1 systemd[1]: Started Session 44 of User zuul.
Nov 25 09:39:27 compute-1 sshd-session[114636]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:39:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:27.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:27 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:28.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:28 compute-1 python3.9[114789]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:39:28 compute-1 ceph-mon[79643]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:28 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:28 compute-1 sudo[114943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrkajyobgmkrfqcevpkvyvcilcgbmnco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063568.5676677-63-275753310618641/AnsiballZ_setup.py'
Nov 25 09:39:28 compute-1 sudo[114943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:28 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:28 compute-1 python3.9[114945]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:39:29 compute-1 sudo[114943]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:29 compute-1 sudo[115028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrccueanvyfhucbjwvqcecoobvuyujgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063568.5676677-63-275753310618641/AnsiballZ_dnf.py'
Nov 25 09:39:29 compute-1 sudo[115028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:29.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:29 compute-1 python3.9[115030]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 09:39:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:29 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b380056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:30.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:30 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:30 compute-1 ceph-mon[79643]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:39:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:30 compute-1 sudo[115028]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:30 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:31 compute-1 python3.9[115181]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:39:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:31 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:32.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:32 compute-1 python3.9[115333]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:39:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:32 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b380056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:32 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:39:32 compute-1 ceph-mon[79643]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:39:32 compute-1 python3.9[115484]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:39:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:32 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:33 compute-1 python3.9[115635]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:39:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:33 compute-1 sshd-session[114639]: Connection closed by 192.168.122.30 port 48616
Nov 25 09:39:33 compute-1 sshd-session[114636]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:33 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Nov 25 09:39:33 compute-1 systemd[1]: session-44.scope: Consumed 4.209s CPU time.
Nov 25 09:39:33 compute-1 systemd-logind[746]: Session 44 logged out. Waiting for processes to exit.
Nov 25 09:39:33 compute-1 systemd-logind[746]: Removed session 44.
Nov 25 09:39:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:33 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:34.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:34 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:34 compute-1 ceph-mon[79643]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:34 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b380056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:35 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:36.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:36 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:36 compute-1 ceph-mon[79643]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:36 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:37 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b380056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:38 compute-1 sshd-session[115662]: Accepted publickey for zuul from 192.168.122.30 port 47790 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:39:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:38 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:38 compute-1 systemd-logind[746]: New session 45 of user zuul.
Nov 25 09:39:38 compute-1 systemd[1]: Started Session 45 of User zuul.
Nov 25 09:39:38 compute-1 sshd-session[115662]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:39:38 compute-1 ceph-mon[79643]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:38 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:39 compute-1 python3.9[115817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:39:39 compute-1 sudo[115820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:39 compute-1 sudo[115820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:39 compute-1 sudo[115820]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:39.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:39 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:40.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:40 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b44002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:40 compute-1 ceph-mon[79643]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:40 compute-1 sudo[115997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wityziqunhmhvfovjqvebfpaktbvcgsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063580.3667111-111-78142988256103/AnsiballZ_file.py'
Nov 25 09:39:40 compute-1 sudo[115997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:40 compute-1 python3.9[115999]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:40 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:40 compute-1 sudo[115997]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:41 compute-1 sudo[116150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvqzfmlvkodtfdawbzmezxhptggyskfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063581.0607617-111-147771750923170/AnsiballZ_file.py'
Nov 25 09:39:41 compute-1 sudo[116150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:41 compute-1 python3.9[116152]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:41 compute-1 sudo[116150]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:41.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:41 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:41 compute-1 sudo[116302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqftuispnrofvrjdphenxzrmkuecjphu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063581.5555089-155-164004571272588/AnsiballZ_stat.py'
Nov 25 09:39:41 compute-1 sudo[116302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:42 compute-1 python3.9[116304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:42 compute-1 sudo[116302]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:42.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:42 compute-1 sudo[116425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgjntngqyiurayiccbydfxnxxjuzcafy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063581.5555089-155-164004571272588/AnsiballZ_copy.py'
Nov 25 09:39:42 compute-1 sudo[116425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:42 compute-1 python3.9[116427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063581.5555089-155-164004571272588/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=fa235c355361ff7f076094ad3db890688c362b66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:42 compute-1 sudo[116425]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:42 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:42 compute-1 ceph-mon[79643]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:42 compute-1 sudo[116577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckmuazujquysjpmpqlcocoxuetzgypkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063582.6627116-155-235041417276238/AnsiballZ_stat.py'
Nov 25 09:39:42 compute-1 sudo[116577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:42 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b44005830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:42 compute-1 python3.9[116579]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:43 compute-1 sudo[116577]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:43 compute-1 sudo[116701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgyjbvebthsvsrqrjjmptftiwjroojqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063582.6627116-155-235041417276238/AnsiballZ_copy.py'
Nov 25 09:39:43 compute-1 sudo[116701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:43 compute-1 python3.9[116703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063582.6627116-155-235041417276238/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=d372d1b4272dc98810d1b396448f10f5be8f829f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:43 compute-1 sudo[116701]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:43.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:43 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:43 compute-1 sudo[116853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptzwqefmmkahazzumhywesbwknkeczbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063583.6720707-155-169820080124110/AnsiballZ_stat.py'
Nov 25 09:39:43 compute-1 sudo[116853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:43 compute-1 python3.9[116855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:44 compute-1 sudo[116853]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:44 compute-1 sudo[116976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txdiineuhprnsqklavqlgrmaymdrsysg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063583.6720707-155-169820080124110/AnsiballZ_copy.py'
Nov 25 09:39:44 compute-1 sudo[116976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:44 compute-1 python3.9[116978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063583.6720707-155-169820080124110/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=b2ab9e9a0e3368e2be12e807db3a3ee7e215f973 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:44 compute-1 sudo[116976]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:44 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:44 compute-1 ceph-mon[79643]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:44 compute-1 sudo[117128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qafrhuoysplaampmxvjfulxjhlbmqphx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063584.5713444-285-205486198439751/AnsiballZ_file.py'
Nov 25 09:39:44 compute-1 sudo[117128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:44 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c005190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:44 compute-1 python3.9[117130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:44 compute-1 sudo[117128]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:45 compute-1 sudo[117281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuwjupyouebkwtluuvylitioigqhdfdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063585.0867403-285-93895536883109/AnsiballZ_file.py'
Nov 25 09:39:45 compute-1 sudo[117281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:45 compute-1 python3.9[117283]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:45 compute-1 sudo[117281]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:45.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:45 compute-1 sudo[117433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-culsoqgewzxyxqstqiiyoffmlpvhqrsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063585.6075325-331-101482989659743/AnsiballZ_stat.py'
Nov 25 09:39:45 compute-1 sudo[117433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:45 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b44005830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:45 compute-1 python3.9[117435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:45 compute-1 sudo[117433]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:46 compute-1 sudo[117556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtbgpetmvihbxogmuudshjnrrhgeoymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063585.6075325-331-101482989659743/AnsiballZ_copy.py'
Nov 25 09:39:46 compute-1 sudo[117556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:46 compute-1 python3.9[117558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063585.6075325-331-101482989659743/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=1e9a695f94a17981990b8e753243b10ac1922ed9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:46 compute-1 sudo[117556]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:46 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:46 compute-1 sudo[117708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdeijwucvjyekgyqtgkwhpdhpukihhwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063586.4582314-331-258021775638954/AnsiballZ_stat.py'
Nov 25 09:39:46 compute-1 sudo[117708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:46 compute-1 ceph-mon[79643]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:46 compute-1 python3.9[117710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:46 compute-1 sudo[117708]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:46 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:47 compute-1 sudo[117831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-korbzfydfzxcqyjlteniwflzfhflhcpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063586.4582314-331-258021775638954/AnsiballZ_copy.py'
Nov 25 09:39:47 compute-1 sudo[117831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:47 compute-1 python3.9[117833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063586.4582314-331-258021775638954/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=6089737efa6d9cfbc115be5d2d9f479510a3f2d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:47 compute-1 sudo[117831]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:47.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:47 compute-1 sudo[117984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naxdqjbonrayexibmvmqazrjnrfbslic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063587.291523-331-252447017062940/AnsiballZ_stat.py'
Nov 25 09:39:47 compute-1 sudo[117984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:47 compute-1 python3.9[117986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:47 compute-1 sudo[117984]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:47 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:47 compute-1 sudo[118107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqgbxjbkyyarfdfapcstbybnircrvqpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063587.291523-331-252447017062940/AnsiballZ_copy.py'
Nov 25 09:39:47 compute-1 sudo[118107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:48 compute-1 python3.9[118109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063587.291523-331-252447017062940/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=ced9b593cc56d9c13fb6b988e712e7d6cd5090b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:48 compute-1 sudo[118107]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:48 compute-1 sudo[118259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoehvrjfgqvmccxhkrsvpmzsobokwfkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063588.2484503-460-152064662570248/AnsiballZ_file.py'
Nov 25 09:39:48 compute-1 sudo[118259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:48 compute-1 python3.9[118261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:48 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b44005830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:48 compute-1 sudo[118259]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:48 compute-1 sudo[118262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:48 compute-1 sudo[118262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:48 compute-1 sudo[118262]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:48 compute-1 ceph-mon[79643]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:39:48 compute-1 sudo[118311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:39:48 compute-1 sudo[118311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:48 compute-1 sudo[118468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbqohnzzkwgepvfpvveydrvbmtwjiys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063588.6916096-460-210614645919940/AnsiballZ_file.py'
Nov 25 09:39:48 compute-1 sudo[118468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:48 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:49 compute-1 python3.9[118473]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:49 compute-1 sudo[118468]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:49 compute-1 sudo[118311]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/093949 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:39:49 compute-1 sudo[118643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkexdqecrraiightfxkhkipyyntwiafg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063589.208069-507-209404730068160/AnsiballZ_stat.py'
Nov 25 09:39:49 compute-1 sudo[118643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:49.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:49 compute-1 python3.9[118645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:49 compute-1 sudo[118643]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:39:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:39:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:39:49 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:49 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:49 compute-1 sudo[118766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kidmmonuenzffbzqihwyltduhjhnvzai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063589.208069-507-209404730068160/AnsiballZ_copy.py'
Nov 25 09:39:49 compute-1 sudo[118766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:50 compute-1 python3.9[118768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063589.208069-507-209404730068160/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=6cb9ef7455cfc141a1bec9148f5d138bdc16130f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:50 compute-1 sudo[118766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:50.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:50 compute-1 sudo[118918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxngxazlemzwhrnxhotmcpcwukluhxyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063590.1631458-507-255676447566560/AnsiballZ_stat.py'
Nov 25 09:39:50 compute-1 sudo[118918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:50 compute-1 python3.9[118920]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:50 compute-1 sudo[118918]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:50 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:50 compute-1 ceph-mon[79643]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:50 compute-1 sudo[119043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-damcoycbpcumbkpfgbxkamloxmlmpihg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063590.1631458-507-255676447566560/AnsiballZ_copy.py'
Nov 25 09:39:50 compute-1 sudo[119043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:50 compute-1 python3.9[119045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063590.1631458-507-255676447566560/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=6089737efa6d9cfbc115be5d2d9f479510a3f2d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:50 compute-1 sudo[119043]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:50 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:51 compute-1 sudo[119196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uovehddcinuqmjdbrzemlwtyykbhupkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063591.0245829-507-50267303569512/AnsiballZ_stat.py'
Nov 25 09:39:51 compute-1 sudo[119196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:51 compute-1 python3.9[119198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:51 compute-1 sudo[119196]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:51 compute-1 sudo[119319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzuihuoqigtuwwufwpwdpivsbcxiewex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063591.0245829-507-50267303569512/AnsiballZ_copy.py'
Nov 25 09:39:51 compute-1 sudo[119319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:51 compute-1 python3.9[119321]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063591.0245829-507-50267303569512/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=b8f6ba1b4af7e30b75cf79303fc0f86a29885904 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:51 compute-1 sudo[119319]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:51 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:52.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:52 compute-1 sudo[119346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:39:52 compute-1 sudo[119346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:52 compute-1 sudo[119346]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:52 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:52 compute-1 ceph-mon[79643]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:52 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:52 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:52 compute-1 sudo[119496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibmrmguniywvwhvemesrawrqvomfsmqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063592.5263438-672-274370043544051/AnsiballZ_file.py'
Nov 25 09:39:52 compute-1 sudo[119496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:52 compute-1 python3.9[119498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:52 compute-1 sudo[119496]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:52 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54002630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:53 compute-1 sudo[119649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vshxdhtbugosdhuqcpizjjtzlovuywbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063593.1110096-705-38320224581105/AnsiballZ_stat.py'
Nov 25 09:39:53 compute-1 sudo[119649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:53 compute-1 python3.9[119651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:53 compute-1 sudo[119649]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:39:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:39:53 compute-1 sudo[119772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkswiawvqjxfshhhfkhftdqbshckhszg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063593.1110096-705-38320224581105/AnsiballZ_copy.py'
Nov 25 09:39:53 compute-1 sudo[119772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:53 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:53 compute-1 python3.9[119774]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063593.1110096-705-38320224581105/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:53 compute-1 sudo[119772]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:54.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:54 compute-1 sudo[119924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pznlvzcxgafnhxlbzlakpvyiymcefxqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063594.0796711-755-268288273490085/AnsiballZ_file.py'
Nov 25 09:39:54 compute-1 sudo[119924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:54 compute-1 python3.9[119926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:54 compute-1 sudo[119924]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:54 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:54 compute-1 ceph-mon[79643]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:54 compute-1 sudo[120076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpridqqqluyzuwwkcvnygsediyvkyelm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063594.57464-780-210523225255189/AnsiballZ_stat.py'
Nov 25 09:39:54 compute-1 sudo[120076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:54 compute-1 python3.9[120078]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:54 compute-1 sudo[120076]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:54 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b48002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:55 compute-1 sudo[120199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wipnvubwmwowayvlasqkylnkmhlvzitx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063594.57464-780-210523225255189/AnsiballZ_copy.py'
Nov 25 09:39:55 compute-1 sudo[120199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:55 compute-1 python3.9[120201]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063594.57464-780-210523225255189/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:55 compute-1 sudo[120199]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:55 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54003000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:55 compute-1 sudo[120352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqhmpwqnrgvrxhjcbpbcgjgdxojwrdst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063595.6903293-832-104557299540626/AnsiballZ_file.py'
Nov 25 09:39:55 compute-1 sudo[120352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:56 compute-1 python3.9[120354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:56 compute-1 sudo[120352]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:56.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:56 compute-1 sudo[120504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifavdxxvjxrmctgsajffkcvuozcorkqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063596.1799684-854-128910567327232/AnsiballZ_stat.py'
Nov 25 09:39:56 compute-1 sudo[120504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:56 compute-1 python3.9[120506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:56 compute-1 sudo[120504]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:56 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:56 compute-1 ceph-mon[79643]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:56 compute-1 sudo[120627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjqaredxgxvbnirlmiweojqhhyuqivyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063596.1799684-854-128910567327232/AnsiballZ_copy.py'
Nov 25 09:39:56 compute-1 sudo[120627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:56 compute-1 python3.9[120629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063596.1799684-854-128910567327232/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:56 compute-1 sudo[120627]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:56 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:57 compute-1 sudo[120780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxxmzejaqpkorcmndaoxfyfvugtjruye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063597.1117191-903-132005706815348/AnsiballZ_file.py'
Nov 25 09:39:57 compute-1 sudo[120780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:57 compute-1 python3.9[120782]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:57 compute-1 sudo[120780]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:57.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:57 compute-1 sudo[120932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgufzxbeuuxmlzowftejnufqgxiusjhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063597.6350498-928-38048006224797/AnsiballZ_stat.py'
Nov 25 09:39:57 compute-1 sudo[120932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:57 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b480040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:57 compute-1 python3.9[120934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:57 compute-1 sudo[120932]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:58.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:58 compute-1 sudo[121055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqidowfhktwrvmwtlqpetzwvavhtqsow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063597.6350498-928-38048006224797/AnsiballZ_copy.py'
Nov 25 09:39:58 compute-1 sudo[121055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:58 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:39:58 compute-1 python3.9[121057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063597.6350498-928-38048006224797/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:58 compute-1 sudo[121055]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:58 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54003000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:58 compute-1 ceph-mon[79643]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:39:58 compute-1 sudo[121207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlebgslkrdslpiisehkhgpupotdonnai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063598.5981114-976-8122422470675/AnsiballZ_file.py'
Nov 25 09:39:58 compute-1 sudo[121207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:58 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:59 compute-1 python3.9[121209]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:59 compute-1 sudo[121207]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:59 compute-1 sudo[121360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zltqlhppnvmrhmvtrmxjpswywyyzmpdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063599.241291-1007-7430215030349/AnsiballZ_stat.py'
Nov 25 09:39:59 compute-1 sudo[121360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:59 compute-1 sudo[121363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:59 compute-1 sudo[121363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:59 compute-1 sudo[121363]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:39:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:59 compute-1 python3.9[121362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:59 compute-1 sudo[121360]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:59 compute-1 sudo[121508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzmyhcuuuwnrclgmaknprzdoqknekfft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063599.241291-1007-7430215030349/AnsiballZ_copy.py'
Nov 25 09:39:59 compute-1 sudo[121508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:39:59 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:59 compute-1 python3.9[121510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063599.241291-1007-7430215030349/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:00 compute-1 sudo[121508]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:00 compute-1 sudo[121660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npmxwvjkwiuojmprqqazgskmmjmaswoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063600.1827078-1054-55758074542561/AnsiballZ_file.py'
Nov 25 09:40:00 compute-1 sudo[121660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:00 compute-1 python3.9[121662]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:00 compute-1 sudo[121660]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:00 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b480040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:00 compute-1 ceph-mon[79643]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:40:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:00 compute-1 ceph-mon[79643]: overall HEALTH_OK
Nov 25 09:40:00 compute-1 sudo[121812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbyglpaluvzhqtvkiafxtrrarvshbxys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063600.6691146-1079-269544465019077/AnsiballZ_stat.py'
Nov 25 09:40:00 compute-1 sudo[121812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:00 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54004100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:00 compute-1 python3.9[121814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:01 compute-1 sudo[121812]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:01 compute-1 sudo[121936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoigryyxfkokhudoqaxqxudnyicndckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063600.6691146-1079-269544465019077/AnsiballZ_copy.py'
Nov 25 09:40:01 compute-1 sudo[121936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:01 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:40:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:01 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:40:01 compute-1 python3.9[121938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063600.6691146-1079-269544465019077/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:01 compute-1 sudo[121936]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:01 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:02.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:02 compute-1 sshd-session[115665]: Connection closed by 192.168.122.30 port 47790
Nov 25 09:40:02 compute-1 sshd-session[115662]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:40:02 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Nov 25 09:40:02 compute-1 systemd[1]: session-45.scope: Consumed 16.022s CPU time.
Nov 25 09:40:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:02 compute-1 systemd-logind[746]: Session 45 logged out. Waiting for processes to exit.
Nov 25 09:40:02 compute-1 systemd-logind[746]: Removed session 45.
Nov 25 09:40:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:02 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:02 compute-1 ceph-mon[79643]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:02 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b480049e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:03 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54004280 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:04.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:04 : epoch 692578fe : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:40:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:04 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:04 compute-1 ceph-mon[79643]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:04 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:05.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:05 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b480049e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:06.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54004ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:06 compute-1 ceph-mon[79643]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:06 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:07 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:08.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:08 compute-1 sshd-session[121966]: Accepted publickey for zuul from 192.168.122.30 port 52432 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:40:08 compute-1 systemd-logind[746]: New session 46 of user zuul.
Nov 25 09:40:08 compute-1 systemd[1]: Started Session 46 of User zuul.
Nov 25 09:40:08 compute-1 sshd-session[121966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:40:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:08 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b480049e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:08 compute-1 ceph-mon[79643]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:40:08 compute-1 sudo[122119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfeoqqztytydcqwnbtubymegxwyvyott ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063608.5048223-27-125471323157870/AnsiballZ_file.py'
Nov 25 09:40:08 compute-1 sudo[122119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:08 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54004ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:09 compute-1 python3.9[122121]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:09 compute-1 sudo[122119]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:09 compute-1 sudo[122272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlqpmjqrbbifjogeoajjfxsywhwrypws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063609.1975353-63-271377658974703/AnsiballZ_stat.py'
Nov 25 09:40:09 compute-1 sudo[122272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:09 compute-1 python3.9[122274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:09 compute-1 sudo[122272]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:09 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:10 compute-1 sudo[122395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeocybuhmhgnkljjgelrektrggtmwpqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063609.1975353-63-271377658974703/AnsiballZ_copy.py'
Nov 25 09:40:10 compute-1 sudo[122395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:10.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:10 compute-1 python3.9[122397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063609.1975353-63-271377658974703/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=366a48c0bc0104e6b502b94bc86d9db21512d98a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:10 compute-1 sudo[122395]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:10 compute-1 sudo[122547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmskvzxpngazjoyjqylklbdltmlsqfqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063610.2987673-63-85255698073505/AnsiballZ_stat.py'
Nov 25 09:40:10 compute-1 sudo[122547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:10 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:10 compute-1 python3.9[122549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:10 compute-1 sudo[122547]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:10 compute-1 ceph-mon[79643]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:40:10 compute-1 sudo[122670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqlaeaucbcdftomuzinvuwvmhpyyzicu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063610.2987673-63-85255698073505/AnsiballZ_copy.py'
Nov 25 09:40:10 compute-1 sudo[122670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:10 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b48005e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:11 compute-1 python3.9[122672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063610.2987673-63-85255698073505/.source.conf _original_basename=ceph.conf follow=False checksum=a12b603cb850b5616045745d010769596d2b9016 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:11 compute-1 sudo[122670]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094011 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:40:11 compute-1 sshd-session[121969]: Connection closed by 192.168.122.30 port 52432
Nov 25 09:40:11 compute-1 sshd-session[121966]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:40:11 compute-1 systemd[1]: session-46.scope: Deactivated successfully.
Nov 25 09:40:11 compute-1 systemd[1]: session-46.scope: Consumed 1.828s CPU time.
Nov 25 09:40:11 compute-1 systemd-logind[746]: Session 46 logged out. Waiting for processes to exit.
Nov 25 09:40:11 compute-1 systemd-logind[746]: Removed session 46.
Nov 25 09:40:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:11.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:11 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54004ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:12 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:12 compute-1 ceph-mon[79643]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:40:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:12 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:40:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:13.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:40:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:13 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:14.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:14 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:14 compute-1 ceph-mon[79643]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:14 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:15.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:15 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:16.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:16 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54005c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:16 compute-1 ceph-mon[79643]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:16 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54005c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:17 compute-1 sshd-session[122700]: Accepted publickey for zuul from 192.168.122.30 port 50182 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:40:17 compute-1 systemd-logind[746]: New session 47 of user zuul.
Nov 25 09:40:17 compute-1 systemd[1]: Started Session 47 of User zuul.
Nov 25 09:40:17 compute-1 sshd-session[122700]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:40:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:17 compute-1 ceph-mon[79643]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:17 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b48005e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:17 compute-1 python3.9[122854]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:40:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:18 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:18 compute-1 sudo[123008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbcjzbhnlnxyhzzqibavsxiguulijvcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063618.3858273-63-155530614862206/AnsiballZ_file.py'
Nov 25 09:40:18 compute-1 sudo[123008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:18 compute-1 python3.9[123010]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:18 compute-1 sudo[123008]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:18 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:19 compute-1 sudo[123160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qovzzhxtlzwbmxeunmlwsxbyhqbeoltn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063618.9743516-63-233443725396313/AnsiballZ_file.py'
Nov 25 09:40:19 compute-1 sudo[123160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:19 compute-1 python3.9[123162]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:19 compute-1 sudo[123160]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:19 compute-1 sudo[123215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:19 compute-1 sudo[123215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:19 compute-1 sudo[123215]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:19.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:19 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54005c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:19 compute-1 python3.9[123338]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:40:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:20.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:20 compute-1 ceph-mon[79643]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:40:20 compute-1 sudo[123488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajjjvcjagvqfpsuvuzycbbtjgjzbpzmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063620.2145114-132-16560997960958/AnsiballZ_seboolean.py'
Nov 25 09:40:20 compute-1 sudo[123488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:20 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b48005e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:20 compute-1 python3.9[123490]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 09:40:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:21 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b180040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:21 compute-1 sudo[123488]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:21.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:21 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b2c005e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:22 compute-1 sudo[123646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjtmojjzucvlxomfejhfwfaggzyamedy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063621.8157785-162-218018668741742/AnsiballZ_setup.py'
Nov 25 09:40:22 compute-1 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 09:40:22 compute-1 sudo[123646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:22.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:22 compute-1 ceph-mon[79643]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:40:22 compute-1 python3.9[123648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:40:22 compute-1 sudo[123646]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:22 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54005c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:22 compute-1 sudo[123731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyagiygjaeevwqvtpsyxtbhtuihezhus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063621.8157785-162-218018668741742/AnsiballZ_dnf.py'
Nov 25 09:40:22 compute-1 sudo[123731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:22 compute-1 python3.9[123733]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:40:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:23 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:23.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:23 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:23 compute-1 sudo[123731]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:24 compute-1 ceph-mon[79643]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:24 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b48005e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:24 compute-1 sudo[123885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irqteiwhojpzbtsjvapmlbeznztsgfdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063624.0831323-198-243109684740040/AnsiballZ_systemd.py'
Nov 25 09:40:24 compute-1 sudo[123885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:24 compute-1 python3.9[123887]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:40:24 compute-1 sudo[123885]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:25 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54005c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:25 compute-1 sudo[124041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etgbhdpilzdsnmfkddazgzqhfcsiqlsh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063625.086308-222-182861071141046/AnsiballZ_edpm_nftables_snippet.py'
Nov 25 09:40:25 compute-1 sudo[124041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:25.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:25 compute-1 python3[124043]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 09:40:25 compute-1 sudo[124041]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:25 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b5c011850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:26 compute-1 sudo[124193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oceoiwrrsvrflxlxviqmeblhrcfmxlhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063625.8236585-249-138334859882421/AnsiballZ_file.py'
Nov 25 09:40:26 compute-1 sudo[124193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:26.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:26 compute-1 python3.9[124195]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:26 compute-1 sudo[124193]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:26 compute-1 ceph-mon[79643]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:26 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:26 compute-1 sudo[124345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdriegburwychxjtirzmuinxnztemsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063626.3449244-273-172834222176294/AnsiballZ_stat.py'
Nov 25 09:40:26 compute-1 sudo[124345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:26 compute-1 python3.9[124347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:26 compute-1 sudo[124345]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:26 compute-1 sudo[124423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfqugyypckuyfheyosakyzfwuqyjosve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063626.3449244-273-172834222176294/AnsiballZ_file.py'
Nov 25 09:40:26 compute-1 sudo[124423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:27 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b48005e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:27 compute-1 python3.9[124425]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:27 compute-1 sudo[124423]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:27 compute-1 sudo[124576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smvhemszltwuqijhzfpcycxdmdkkxifm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063627.3130774-309-263056126087203/AnsiballZ_stat.py'
Nov 25 09:40:27 compute-1 sudo[124576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:27.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:27 compute-1 python3.9[124578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:27 compute-1 sudo[124576]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:27 compute-1 sudo[124654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shxtlhxhibndtxdiavtwpmuphbmfjtty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063627.3130774-309-263056126087203/AnsiballZ_file.py'
Nov 25 09:40:27 compute-1 sudo[124654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:27 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54005c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:27 compute-1 python3.9[124656]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.pz3y0kyh recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:27 compute-1 sudo[124654]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:28.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:28 compute-1 ceph-mon[79643]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:40:28 compute-1 sudo[124806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzwjlevgriqnjaapdgnmuoajinhclche ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063628.1626875-345-155952923997727/AnsiballZ_stat.py'
Nov 25 09:40:28 compute-1 sudo[124806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:28 compute-1 python3.9[124808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:28 compute-1 sudo[124806]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:28 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b5c011850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:28 compute-1 sudo[124884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmyozvjqezgqobpwhtqrwsnaqecycxnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063628.1626875-345-155952923997727/AnsiballZ_file.py'
Nov 25 09:40:28 compute-1 sudo[124884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:28 compute-1 python3.9[124886]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:28 compute-1 sudo[124884]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:29 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:29 compute-1 sudo[125037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gduqdvbdcblvzunacclyuquoghubqvln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063629.078787-384-49619156028722/AnsiballZ_command.py'
Nov 25 09:40:29 compute-1 sudo[125037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:29.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:29 compute-1 python3.9[125039]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:29 compute-1 sudo[125037]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:29 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b48005e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:30.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:30 compute-1 sudo[125190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tysfmrfwspqxdvcqatzksooujrgjnizj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063629.8745723-408-9539076644040/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 09:40:30 compute-1 sudo[125190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:30 compute-1 ceph-mon[79643]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:30 compute-1 python3[125192]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 09:40:30 compute-1 sudo[125190]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:30 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b54005c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:30 compute-1 sudo[125342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svenhcikbfomdoiljligsjjoxsdphfda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063630.531529-432-131508239131642/AnsiballZ_stat.py'
Nov 25 09:40:30 compute-1 sudo[125342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:30 compute-1 python3.9[125344]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:30 compute-1 sudo[125342]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:31 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b5c011850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:31 compute-1 sudo[125468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xouyxtmzitnzvksczyeitmmpqhglksav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063630.531529-432-131508239131642/AnsiballZ_copy.py'
Nov 25 09:40:31 compute-1 sudo[125468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:31 compute-1 python3.9[125470]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063630.531529-432-131508239131642/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:31 compute-1 sudo[125468]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:31.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:31 compute-1 sudo[125620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbdybgrcsyozqwebwqymqaunhwxquges ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063631.5899765-477-150173458954201/AnsiballZ_stat.py'
Nov 25 09:40:31 compute-1 sudo[125620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:31 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b0c004a10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:31 compute-1 python3.9[125622]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:31 compute-1 sudo[125620]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:32.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:32 compute-1 sudo[125745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmuykzmkcebtmxoegvexvmfwokhswkqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063631.5899765-477-150173458954201/AnsiballZ_copy.py'
Nov 25 09:40:32 compute-1 sudo[125745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:32 compute-1 ceph-mon[79643]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:40:32 compute-1 python3.9[125747]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063631.5899765-477-150173458954201/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:32 compute-1 sudo[125745]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[103113]: 25/11/2025 09:40:32 : epoch 692578fe : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7b48005e70 fd 38 proxy ignored for local
Nov 25 09:40:32 compute-1 kernel: ganesha.nfsd[115718]: segfault at 50 ip 00007f7bc8f0b32e sp 00007f7b8affc210 error 4 in libntirpc.so.5.8[7f7bc8ef0000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 25 09:40:32 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:40:32 compute-1 systemd[1]: Started Process Core Dump (PID 125772/UID 0).
Nov 25 09:40:32 compute-1 sudo[125899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qojbnwyqraqcvmyemhnvtaxnxodooqtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063632.6875613-522-32698124384653/AnsiballZ_stat.py'
Nov 25 09:40:32 compute-1 sudo[125899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:33 compute-1 python3.9[125901]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:33 compute-1 sudo[125899]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:33 compute-1 sudo[126025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojfxnbhluqbloiwvfwruvkwwnutgwjlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063632.6875613-522-32698124384653/AnsiballZ_copy.py'
Nov 25 09:40:33 compute-1 sudo[126025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:33 compute-1 python3.9[126027]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063632.6875613-522-32698124384653/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:33 compute-1 sudo[126025]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:33.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:33 compute-1 systemd-coredump[125773]: Process 103117 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007f7bc8f0b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:40:33 compute-1 systemd[1]: systemd-coredump@2-125772-0.service: Deactivated successfully.
Nov 25 09:40:33 compute-1 systemd[1]: systemd-coredump@2-125772-0.service: Consumed 1.029s CPU time.
Nov 25 09:40:33 compute-1 podman[126129]: 2025-11-25 09:40:33.798397453 +0000 UTC m=+0.021753974 container died f79e1654075b660104ebd7026ded15337882cfa104df9d40982b65afb70ac2b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:40:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-a5c4c9245584020db564b42cd11ad45e4cd1b7946e29a1201e68cfac609d1dd9-merged.mount: Deactivated successfully.
Nov 25 09:40:33 compute-1 podman[126129]: 2025-11-25 09:40:33.816220785 +0000 UTC m=+0.039577305 container remove f79e1654075b660104ebd7026ded15337882cfa104df9d40982b65afb70ac2b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:33 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:40:33 compute-1 sudo[126211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxahjzwmpdzdaoadrtskixaozhvcrvsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063633.6815026-567-234331547214206/AnsiballZ_stat.py'
Nov 25 09:40:33 compute-1 sudo[126211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:33 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:40:33 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.112s CPU time.
Nov 25 09:40:34 compute-1 python3.9[126216]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:34 compute-1 sudo[126211]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:34.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:34 compute-1 ceph-mon[79643]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:34 compute-1 sudo[126341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxnbnbxfggynlkhojhzbypnqegempaqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063633.6815026-567-234331547214206/AnsiballZ_copy.py'
Nov 25 09:40:34 compute-1 sudo[126341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:34 compute-1 python3.9[126343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063633.6815026-567-234331547214206/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:34 compute-1 sudo[126341]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:34 compute-1 sudo[126493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sebsoygtuousfhiptjedrtwjlxjlrprg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063634.6377761-612-80854229116011/AnsiballZ_stat.py'
Nov 25 09:40:34 compute-1 sudo[126493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:35 compute-1 python3.9[126495]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:35 compute-1 sudo[126493]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:35 compute-1 sudo[126619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwwpmdemnoijilaerwffetasazciihva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063634.6377761-612-80854229116011/AnsiballZ_copy.py'
Nov 25 09:40:35 compute-1 sudo[126619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:35 compute-1 python3.9[126621]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063634.6377761-612-80854229116011/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:35 compute-1 sudo[126619]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.002000016s ======
Nov 25 09:40:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:35.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000016s
Nov 25 09:40:35 compute-1 sudo[126771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvhevsbozjfetixuzmqkqbnyliyapgnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063635.818373-657-24944058945075/AnsiballZ_file.py'
Nov 25 09:40:35 compute-1 sudo[126771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:36 compute-1 python3.9[126773]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:36 compute-1 sudo[126771]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:36 compute-1 ceph-mon[79643]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:36 compute-1 sudo[126923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niwrinrkxnknlokpvvrtyncyopgsgzxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063636.3233669-681-129105500848316/AnsiballZ_command.py'
Nov 25 09:40:36 compute-1 sudo[126923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:36 compute-1 python3.9[126925]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:36 compute-1 sudo[126923]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:37 compute-1 sudo[127078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhtcubnzkprualdbyitfvxmbdcdoiosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063636.8465245-705-137730349012000/AnsiballZ_blockinfile.py'
Nov 25 09:40:37 compute-1 sudo[127078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:37 compute-1 python3.9[127080]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:37 compute-1 sudo[127078]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:37.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:37 compute-1 sudo[127231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfeexstsrknufelpacgezydxuynoyvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063637.572767-732-77383834451207/AnsiballZ_command.py'
Nov 25 09:40:37 compute-1 sudo[127231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:37 compute-1 python3.9[127233]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:37 compute-1 sudo[127231]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:38.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:38 compute-1 sudo[127384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baymsyuzudjoxzzgzcvxsolalkmyjgjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063638.0864851-756-77966894244963/AnsiballZ_stat.py'
Nov 25 09:40:38 compute-1 sudo[127384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:38 compute-1 ceph-mon[79643]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:40:38 compute-1 python3.9[127386]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:40:38 compute-1 sudo[127384]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094038 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:40:38 compute-1 sudo[127538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfneeacibvvywoonlbqnspoviuxtehdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063638.609675-780-118216795610806/AnsiballZ_command.py'
Nov 25 09:40:38 compute-1 sudo[127538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:39 compute-1 python3.9[127540]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:39 compute-1 sudo[127538]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:39 compute-1 sudo[127694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdqrzowwehozinrifoondeoqeprajnlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063639.3421125-804-88749979441906/AnsiballZ_file.py'
Nov 25 09:40:39 compute-1 sudo[127694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:39.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:39 compute-1 sudo[127696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:39 compute-1 sudo[127696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:39 compute-1 sudo[127696]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:39 compute-1 python3.9[127697]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:39 compute-1 sudo[127694]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:40.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:40 compute-1 ceph-mon[79643]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:40 compute-1 python3.9[127871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:40:41 compute-1 sudo[128023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcmwfsihrejnywohuhpxhguzbrsuypfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063641.349722-924-48777112013017/AnsiballZ_command.py'
Nov 25 09:40:41 compute-1 sudo[128023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:41.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:41 compute-1 python3.9[128025]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:41 compute-1 ovs-vsctl[128026]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 09:40:41 compute-1 sudo[128023]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:42.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:42 compute-1 sudo[128176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segeqqkfbwhpamqvxraycptludakaluo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063641.9384727-951-260367694619523/AnsiballZ_command.py'
Nov 25 09:40:42 compute-1 sudo[128176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:42 compute-1 ceph-mon[79643]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:42 compute-1 python3.9[128178]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:42 compute-1 sudo[128176]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:42 compute-1 sudo[128331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bguwanmrrrwdfuhivzbnldtcfitsngzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063642.6011739-975-91704297972628/AnsiballZ_command.py'
Nov 25 09:40:42 compute-1 sudo[128331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:42 compute-1 python3.9[128333]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:42 compute-1 ovs-vsctl[128334]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 09:40:42 compute-1 sudo[128331]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:43 compute-1 python3.9[128485]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:40:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:43.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:43 compute-1 sudo[128637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjdznuniaopknrmrouhaauozjbrgvkum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063643.6927884-1026-83334516359139/AnsiballZ_file.py'
Nov 25 09:40:43 compute-1 sudo[128637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:43 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 3.
Nov 25 09:40:43 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:40:43 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.112s CPU time.
Nov 25 09:40:43 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
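systemd is here on its third automatic restart of the NFS-Ganesha unit: the cephadm-generated service carries a restart policy, so each exit schedules another attempt. The counter behind "restart counter is at 3" can be read back per unit; a sketch, with the unit name taken from the log lines above:

    # restart_count.py: read a unit's automatic-restart counter, the value
    # systemd reports as "restart counter is at N" in the journal.
    import subprocess

    def nrestarts(unit: str) -> int:
        out = subprocess.run(
            ["systemctl", "show", "--value", "-p", "NRestarts", unit],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())

    unit = ("ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
            "@nfs.cephfs.0.0.compute-1.yfzsxe.service")
    print(nrestarts(unit))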
Nov 25 09:40:44 compute-1 python3.9[128639]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:44 compute-1 sudo[128637]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:44 compute-1 podman[128698]: 2025-11-25 09:40:44.128242515 +0000 UTC m=+0.026906946 container create 12bb688e435797a995d480c2cc7bff94fb222f571369354c2e6b8dccea18b617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:40:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:44.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9b98b091317e271c042df67dbc23661330ca34effeb05258fd75842fb370aa/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9b98b091317e271c042df67dbc23661330ca34effeb05258fd75842fb370aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9b98b091317e271c042df67dbc23661330ca34effeb05258fd75842fb370aa/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9b98b091317e271c042df67dbc23661330ca34effeb05258fd75842fb370aa/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:44 compute-1 podman[128698]: 2025-11-25 09:40:44.172697163 +0000 UTC m=+0.071361604 container init 12bb688e435797a995d480c2cc7bff94fb222f571369354c2e6b8dccea18b617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 09:40:44 compute-1 podman[128698]: 2025-11-25 09:40:44.176310578 +0000 UTC m=+0.074975009 container start 12bb688e435797a995d480c2cc7bff94fb222f571369354c2e6b8dccea18b617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:40:44 compute-1 bash[128698]: 12bb688e435797a995d480c2cc7bff94fb222f571369354c2e6b8dccea18b617
Nov 25 09:40:44 compute-1 podman[128698]: 2025-11-25 09:40:44.117098681 +0000 UTC m=+0.015763132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:40:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:44 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:40:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:44 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:40:44 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:40:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:44 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:40:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:44 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:40:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:44 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:40:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:44 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:40:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:44 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:40:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:44 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
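The ganesha.nfsd lines above share a fixed layout: date, time, an epoch field (which appears to be the daemon start time as a hex Unix timestamp; 0x6925799c is 1764063644, i.e. 09:40:44 UTC, matching this startup), host, program[thread], function, component, severity, and message. A sketch of a parser for that layout (the regex and field names are my own):

    # parse_ganesha.py: split the NFS-Ganesha log layout seen above into fields.
    import re
    from datetime import datetime, timezone

    LINE = re.compile(
        r"(?P<date>\d{2}/\d{2}/\d{4}) (?P<time>\d{2}:\d{2}:\d{2}) : "
        r"epoch (?P<epoch>[0-9a-f]+) : (?P<host>\S+) : "
        r"(?P<prog>\S+?)\[(?P<thread>[^\]]+)\] (?P<func>\S+) "
        r":(?P<component>[^:]+):(?P<level>[^:]+):(?P<msg>.*)"
    )

    def parse(line: str) -> dict:
        m = LINE.match(line)
        if m is None:
            raise ValueError("not a ganesha log line")
        rec = {k: v.strip() for k, v in m.groupdict().items()}
        # the epoch field encodes the server start time in hex
        rec["started"] = datetime.fromtimestamp(int(rec["epoch"], 16), timezone.utc)
        return rec

    sample = ("25/11/2025 09:40:44 : epoch 6925799c : compute-1 : "
              "ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT "
              ":NFS Server Now IN GRACE, duration 90")
    print(parse(sample)["started"])   # 2025-11-25 09:40:44+00:00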
Nov 25 09:40:44 compute-1 ceph-mon[79643]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:44 compute-1 sudo[128881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyoiwreojzenncuswyzwdnqdkuudngs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063644.2262022-1050-26675039593655/AnsiballZ_stat.py'
Nov 25 09:40:44 compute-1 sudo[128881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:44 compute-1 python3.9[128883]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:44 compute-1 sudo[128881]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:44 compute-1 sudo[128959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhilvqarquorvhablldjjtcyduffjbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063644.2262022-1050-26675039593655/AnsiballZ_file.py'
Nov 25 09:40:44 compute-1 sudo[128959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:44 compute-1 python3.9[128961]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:44 compute-1 sudo[128959]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:45 compute-1 sudo[129112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tltafyaykfuaqxkvnltleuioogsjwlkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063645.123462-1050-86718187017845/AnsiballZ_stat.py'
Nov 25 09:40:45 compute-1 sudo[129112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:45 compute-1 python3.9[129114]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:45 compute-1 sudo[129112]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:45.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:45 compute-1 sudo[129190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvyvojsjcofxwcwlrgjnsecxzbfcmll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063645.123462-1050-86718187017845/AnsiballZ_file.py'
Nov 25 09:40:45 compute-1 sudo[129190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:45 compute-1 python3.9[129192]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:45 compute-1 sudo[129190]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:46.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
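The radosgw pattern repeating every second or two is a pair of health probes (192.168.122.100 and 192.168.122.102 issuing HEAD / HTTP/1.0), each logged by the beast frontend as client, user, timestamp, request line, status, byte count, and latency. A sketch for extracting those fields, assuming the layout shown above (the regex is my own):

    # parse_beast.py: pull the fields out of the radosgw "beast:" access lines.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('Nov 25 09:40:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: '
            '192.168.122.100 - anonymous [25/Nov/2025:09:40:46.134 +0000] '
            '"HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m["client"], m["status"], float(m["latency"]))
    # -> 192.168.122.100 200 0.0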
Nov 25 09:40:46 compute-1 sudo[129342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwcbpoiyggzkmyquknmdalzgnyptlebt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063646.0001202-1119-209686257118070/AnsiballZ_file.py'
Nov 25 09:40:46 compute-1 sudo[129342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:46 compute-1 python3.9[129344]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:46 compute-1 sudo[129342]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-1 ceph-mon[79643]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:46 compute-1 sudo[129494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skrfdjftnlornnxcqmzubtynybykpmjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063646.5011141-1143-38573116302874/AnsiballZ_stat.py'
Nov 25 09:40:46 compute-1 sudo[129494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:46 compute-1 python3.9[129496]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:46 compute-1 sudo[129494]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:47 compute-1 sudo[129572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzfjakqsxermoqftcqfnftydfwpxdypq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063646.5011141-1143-38573116302874/AnsiballZ_file.py'
Nov 25 09:40:47 compute-1 sudo[129572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:47 compute-1 python3.9[129574]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:47 compute-1 sudo[129572]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:47.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:47 compute-1 sudo[129725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkrrikfecdovadyxpjywdhecitpppwli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063647.3896003-1179-76469642917831/AnsiballZ_stat.py'
Nov 25 09:40:47 compute-1 sudo[129725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:47 compute-1 python3.9[129727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:47 compute-1 sudo[129725]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.748553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647748594, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2461, "num_deletes": 251, "total_data_size": 6380482, "memory_usage": 6473752, "flush_reason": "Manual Compaction"}
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647755319, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2610477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10611, "largest_seqno": 13067, "table_properties": {"data_size": 2602695, "index_size": 4276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20252, "raw_average_key_size": 21, "raw_value_size": 2585438, "raw_average_value_size": 2698, "num_data_blocks": 187, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063447, "oldest_key_time": 1764063447, "file_creation_time": 1764063647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 6925 microseconds, and 4695 cpu microseconds.
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755406) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2610477 bytes OK
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755548) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755901) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755912) EVENT_LOG_v1 {"time_micros": 1764063647755909, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755921) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6369352, prev total WAL file size 6369352, number of live WAL files 2.
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.757051) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2549KB)], [21(13MB)]
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647757071, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16472136, "oldest_snapshot_seqno": -1}
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4454 keys, 14454046 bytes, temperature: kUnknown
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647791328, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14454046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14419644, "index_size": 22196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 111795, "raw_average_key_size": 25, "raw_value_size": 14333782, "raw_average_value_size": 3218, "num_data_blocks": 957, "num_entries": 4454, "num_filter_entries": 4454, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764063647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.791516) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14454046 bytes
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.792090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 480.2 rd, 421.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 13.2 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(11.8) write-amplify(5.5) OK, records in: 4880, records dropped: 426 output_compression: NoCompression
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.792107) EVENT_LOG_v1 {"time_micros": 1764063647792099, "job": 10, "event": "compaction_finished", "compaction_time_micros": 34304, "compaction_time_cpu_micros": 21568, "output_level": 6, "num_output_files": 1, "total_output_size": 14454046, "num_input_records": 4880, "num_output_records": 4454, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647792551, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647794389, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.757024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.794436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.794439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.794440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.794441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:47.794442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
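Everything the monitor's embedded RocksDB tags with EVENT_LOG_v1 (flush_started, table_file_creation, compaction_finished, table_file_deletion) carries a single-line JSON payload, so figures like the read-write-amplify(11.8) for job 10 can be recomputed straight from the journal. A minimal sketch, assuming each payload stays on one line as it does here:

    # rocksdb_events.py: summarize EVENT_LOG_v1 records from a journal dump on stdin.
    import json
    import sys

    def events(stream):
        for line in stream:
            _, marker, payload = line.partition("EVENT_LOG_v1 ")
            if marker:
                yield json.loads(payload)

    for ev in events(sys.stdin):
        if ev.get("event") == "compaction_finished":
            mb = ev["total_output_size"] / 1e6
            ms = ev["compaction_time_micros"] / 1e3
            print(f"job {ev['job']}: level {ev['output_level']}, "
                  f"{ev['num_output_files']} file(s), {mb:.1f} MB in {ms:.1f} ms")

Fed the lines above, this prints "job 10: level 6, 1 file(s), 14.5 MB in 34.3 ms".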
Nov 25 09:40:47 compute-1 sudo[129803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prwmrdefedinjuhelllexbnlsaybdxmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063647.3896003-1179-76469642917831/AnsiballZ_file.py'
Nov 25 09:40:47 compute-1 sudo[129803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:48 compute-1 python3.9[129805]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:48 compute-1 sudo[129803]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:48.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:48 compute-1 ceph-mon[79643]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:40:48 compute-1 sudo[129955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ridcvsavshlwzmmfwjqkapvxmqecjsoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063648.3832428-1215-174340467680067/AnsiballZ_systemd.py'
Nov 25 09:40:48 compute-1 sudo[129955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:48 compute-1 python3.9[129957]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:40:48 compute-1 systemd[1]: Reloading.
Nov 25 09:40:48 compute-1 systemd-rc-local-generator[129977]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:40:48 compute-1 systemd-sysv-generator[129981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:40:49 compute-1 sudo[129955]: pam_unix(sudo:session): session closed for user root
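The ansible.builtin.systemd task above drove this burst: daemon_reload=True produces the "Reloading." line (and re-runs the generators, hence the rc.local and SysV network warnings), then enabled=True and state=started wire up and activate the unit. Roughly the same sequence with plain systemctl calls, as a sketch:

    # enable_unit.py: approximate equivalent of the systemd task logged above.
    import subprocess

    def enable_and_start(unit: str) -> None:
        # daemon_reload=True: re-run generators and reload unit files
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        # enabled=True: create the [Install] symlinks for the unit
        subprocess.run(["systemctl", "enable", unit], check=True)
        # state=started: activate it now
        subprocess.run(["systemctl", "start", unit], check=True)

    enable_and_start("edpm-container-shutdown.service")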
Nov 25 09:40:49 compute-1 sudo[130145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iifvfyvyxbmbtcqjsyfcevztkalfejfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063649.2250626-1239-187757905556399/AnsiballZ_stat.py'
Nov 25 09:40:49 compute-1 sudo[130145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:49.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:49 compute-1 python3.9[130147]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:49 compute-1 sudo[130145]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:49 compute-1 sudo[130223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqmlypovpehwlpvoycrieskaszqhzach ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063649.2250626-1239-187757905556399/AnsiballZ_file.py'
Nov 25 09:40:49 compute-1 sudo[130223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:49 compute-1 python3.9[130225]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:49 compute-1 sudo[130223]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:50 compute-1 sudo[130375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cixqlzihutkgxvjiwiybibvyzhqzsudl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063650.0678682-1275-10719114304232/AnsiballZ_stat.py'
Nov 25 09:40:50 compute-1 sudo[130375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:50 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:40:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:50 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:40:50 compute-1 ceph-mon[79643]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:40:50 compute-1 python3.9[130377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:50 compute-1 sudo[130375]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:50 compute-1 sudo[130453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iynvnkqlwnonyqdqqalfwfjspsgpccjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063650.0678682-1275-10719114304232/AnsiballZ_file.py'
Nov 25 09:40:50 compute-1 sudo[130453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:50 compute-1 python3.9[130455]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:50 compute-1 sudo[130453]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:51 compute-1 sudo[130606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbwcsnkohrthaapvkzxatfedpvbueqrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063650.937335-1311-234489988917948/AnsiballZ_systemd.py'
Nov 25 09:40:51 compute-1 sudo[130606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.375238) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651375256, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 251, "total_data_size": 123080, "memory_usage": 129528, "flush_reason": "Manual Compaction"}
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651376291, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 81017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13072, "largest_seqno": 13357, "table_properties": {"data_size": 79107, "index_size": 138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4703, "raw_average_key_size": 17, "raw_value_size": 75385, "raw_average_value_size": 279, "num_data_blocks": 6, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063648, "oldest_key_time": 1764063648, "file_creation_time": 1764063651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 1071 microseconds, and 466 cpu microseconds.
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376310) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 81017 bytes OK
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376318) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376796) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376805) EVENT_LOG_v1 {"time_micros": 1764063651376802, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376812) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 120931, prev total WAL file size 120931, number of live WAL files 2.
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.377280) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(79KB)], [24(13MB)]
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651377298, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 14535063, "oldest_snapshot_seqno": -1}
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4214 keys, 11709109 bytes, temperature: kUnknown
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651405933, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11709109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11677793, "index_size": 19686, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 107746, "raw_average_key_size": 25, "raw_value_size": 11597587, "raw_average_value_size": 2752, "num_data_blocks": 840, "num_entries": 4214, "num_filter_entries": 4214, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764063651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.406154) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11709109 bytes
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.406716) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 505.2 rd, 407.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.8 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(323.9) write-amplify(144.5) OK, records in: 4724, records dropped: 510 output_compression: NoCompression
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.406729) EVENT_LOG_v1 {"time_micros": 1764063651406723, "job": 12, "event": "compaction_finished", "compaction_time_micros": 28771, "compaction_time_cpu_micros": 16467, "output_level": 6, "num_output_files": 1, "total_output_size": 11709109, "num_input_records": 4724, "num_output_records": 4214, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651407082, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651408637, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.377019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.408721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.408723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.408724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.408726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:40:51.408726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-1 python3.9[130608]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:40:51 compute-1 systemd[1]: Reloading.
Nov 25 09:40:51 compute-1 systemd-sysv-generator[130635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:40:51 compute-1 systemd-rc-local-generator[130632]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:40:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:51.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:51 compute-1 systemd[1]: Starting Create netns directory...
Nov 25 09:40:51 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:40:51 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:40:51 compute-1 systemd[1]: Finished Create netns directory.
Nov 25 09:40:51 compute-1 sudo[130606]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:52.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:52 compute-1 sudo[130800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaubnkesnvrltrteqekhuhtygcnkikib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063652.1127512-1341-21693703540528/AnsiballZ_file.py'
Nov 25 09:40:52 compute-1 sudo[130800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:52 compute-1 ceph-mon[79643]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:52 compute-1 python3.9[130802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:52 compute-1 sudo[130803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:52 compute-1 sudo[130803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:52 compute-1 sudo[130800]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-1 sudo[130803]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-1 sudo[130828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:40:52 compute-1 sudo[130828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:52 compute-1 sudo[131019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wduqcwrvfpqbzxchdihexvvqytbfsgqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063652.6206667-1365-208788289655619/AnsiballZ_stat.py'
Nov 25 09:40:52 compute-1 sudo[131019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:52 compute-1 sudo[130828]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-1 python3.9[131021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:52 compute-1 sudo[131019]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:53 compute-1 sudo[131155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoflphbpmqcxrpgfzglogjdjnwfepthg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063652.6206667-1365-208788289655619/AnsiballZ_copy.py'
Nov 25 09:40:53 compute-1 sudo[131155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:53 compute-1 python3.9[131157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063652.6206667-1365-208788289655619/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
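The stat/copy pairs in this stretch are Ansible's idempotence handshake: the module stats the destination, compares its SHA-1 against the digest recorded for the source (the checksum= value logged above), and only transfers on a mismatch. A sketch of that comparison with hypothetical paths:

    # copy_if_changed.py: the digest test behind the stat/copy pattern above.
    import hashlib
    import shutil
    from pathlib import Path
    from typing import Optional

    def sha1_of(path: Path) -> Optional[str]:
        if not path.is_file():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def copy_if_changed(src: Path, dest: Path) -> bool:
        """Copy src over dest only when the digests differ; True means changed."""
        if sha1_of(src) == sha1_of(dest):
            return False
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
        return True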
Nov 25 09:40:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:40:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:40:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:40:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:53 compute-1 sudo[131155]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:53.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:53 compute-1 sudo[131307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvuoshqhocmmpbwcrurbsomptawqnhxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063653.7710636-1416-250392169141319/AnsiballZ_file.py'
Nov 25 09:40:53 compute-1 sudo[131307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:54 compute-1 python3.9[131309]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:54 compute-1 sudo[131307]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:54.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:54 compute-1 ceph-mon[79643]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:54 compute-1 sudo[131459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejbzrkwqvxbzwhirdadbghaamhjskwla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063654.3102672-1440-137078241961203/AnsiballZ_stat.py'
Nov 25 09:40:54 compute-1 sudo[131459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:54 compute-1 python3.9[131461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:54 compute-1 sudo[131459]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-1 sudo[131582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plkvharmgthytkbltvfvlaniihizizdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063654.3102672-1440-137078241961203/AnsiballZ_copy.py'
Nov 25 09:40:54 compute-1 sudo[131582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:55 compute-1 python3.9[131584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063654.3102672-1440-137078241961203/.source.json _original_basename=.chtkkna0 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:55 compute-1 sudo[131582]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:55 compute-1 sudo[131735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfnoubybjnpxcpsxrhanxchtopmsdygs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063655.2577655-1485-44129357874813/AnsiballZ_file.py'
Nov 25 09:40:55 compute-1 sudo[131735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:55.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:55 compute-1 python3.9[131737]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:55 compute-1 sudo[131735]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:55 compute-1 sudo[131887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poqstevommwcdrgvtzvfyhtrlrxxkdbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063655.8083282-1509-154374355275928/AnsiballZ_stat.py'
Nov 25 09:40:56 compute-1 sudo[131887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:56 compute-1 sudo[131890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:40:56 compute-1 sudo[131890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:56 compute-1 sudo[131890]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:40:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:56.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:40:56 compute-1 sudo[131887]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
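Note: the CRIT lines above (dbus_bus_get failed, gsh_dbus_register_path with no DBUS connection) stem from the missing /run/dbus/system_bus_socket inside the cephadm NFS container; ganesha degrades gracefully, shutting down its DBus service thread while still reaching NFS SERVER INITIALIZED. A quick check, as a sketch; <ganesha-ctr> is a placeholder for the container name, which this log does not show directly:

    sudo podman exec <ganesha-ctr> test -S /run/dbus/system_bus_socket \
      || echo "no system bus socket inside the container"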
Nov 25 09:40:56 compute-1 sudo[132048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpzsrfevhsyxkfzeusgisynaxyhspcqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063655.8083282-1509-154374355275928/AnsiballZ_copy.py'
Nov 25 09:40:56 compute-1 sudo[132048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:56 compute-1 ceph-mon[79643]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:56 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:56 compute-1 sudo[132048]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:56 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:57 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdf040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:57 compute-1 sudo[132204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwlybtzdqhrrwuugytdldbcdnagugyxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063656.9180503-1560-117579510205030/AnsiballZ_container_config_data.py'
Nov 25 09:40:57 compute-1 sudo[132204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:57 compute-1 python3.9[132206]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 09:40:57 compute-1 sudo[132204]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:57.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:57 compute-1 sudo[132356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwocdbjsbacltryysjrxdghxteqbtnev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063657.5732284-1587-28765500561557/AnsiballZ_container_config_hash.py'
Nov 25 09:40:57 compute-1 sudo[132356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:57 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58c8002380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:58 compute-1 python3.9[132358]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:40:58 compute-1 sudo[132356]: pam_unix(sudo:session): session closed for user root
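Note: container_config_hash summarizes the config trees under /var/lib/config-data so that a changed config can trigger a container restart. As a rough illustration only, not the module's exact algorithm, a stable digest over a tree could be computed like this:

    find /var/lib/config-data -type f -print0 \
      | sort -z | xargs -0 sha256sum | sha256sum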
Nov 25 09:40:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:58.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:58 compute-1 ceph-mon[79643]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:40:58 compute-1 sudo[132508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkelajqtergserquoafdppeeqxmeddzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063658.2692692-1614-270233097028800/AnsiballZ_podman_container_info.py'
Nov 25 09:40:58 compute-1 sudo[132508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094058 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:40:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:58 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:58 compute-1 python3.9[132510]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 09:40:58 compute-1 sudo[132508]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:59 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:40:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:59.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:59 compute-1 sudo[132555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:59 compute-1 sudo[132555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:59 compute-1 sudo[132555]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:40:59 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdf960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:00 compute-1 sudo[132705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdfiqhqdooothiuzzllnhyxusiudrav ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063659.679426-1653-265252706438704/AnsiballZ_edpm_container_manage.py'
Nov 25 09:41:00 compute-1 sudo[132705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:00.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:00 compute-1 python3[132707]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:41:00 compute-1 ceph-mon[79643]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:41:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:00 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdf960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:01 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdf960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:01.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:01 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:02 compute-1 ceph-mon[79643]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:02 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:03 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:03.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:03 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdf960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:04.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:04 compute-1 ceph-mon[79643]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:04 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:05 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:05 compute-1 podman[132718]: 2025-11-25 09:41:05.290219241 +0000 UTC m=+5.010633611 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 09:41:05 compute-1 podman[132818]: 2025-11-25 09:41:05.382167996 +0000 UTC m=+0.028223112 container create b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:41:05 compute-1 podman[132818]: 2025-11-25 09:41:05.368640367 +0000 UTC m=+0.014695493 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 09:41:05 compute-1 python3[132707]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 09:41:05 compute-1 sudo[132705]: pam_unix(sudo:session): session closed for user root
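Note: edpm_container_manage stores the whole config_data dict as a label on the container, which is what makes later idempotency checks possible. The label can be read back, as a sketch:

    sudo podman inspect ovn_controller \
      --format '{{ index .Config.Labels "config_data" }}'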
Nov 25 09:41:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:05 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:06.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:06 compute-1 ceph-mon[79643]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:06 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdf960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:06 compute-1 sudo[132996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawyryxuqedawoktsrdajwdisxlkdopa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063666.5312557-1677-201573248938293/AnsiballZ_stat.py'
Nov 25 09:41:06 compute-1 sudo[132996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:06 compute-1 python3.9[132998]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:41:06 compute-1 sudo[132996]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:07 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:07 compute-1 sudo[133151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffirbnhypllsaooxpmlpjachwgbntzqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.1321664-1704-232131448253070/AnsiballZ_file.py'
Nov 25 09:41:07 compute-1 sudo[133151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:07 compute-1 python3.9[133153]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:07 compute-1 sudo[133151]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:07 compute-1 sudo[133227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhvxxkqrdcilndxfnjlvidkyrxecchrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.1321664-1704-232131448253070/AnsiballZ_stat.py'
Nov 25 09:41:07 compute-1 sudo[133227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:07 compute-1 python3.9[133229]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:41:07 compute-1 sudo[133227]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:07 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:08 compute-1 sudo[133378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvfnldinuqhxjceorsirbmhdenyzdtrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.8262796-1704-227583977478347/AnsiballZ_copy.py'
Nov 25 09:41:08 compute-1 sudo[133378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:08.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:08 compute-1 python3.9[133380]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063667.8262796-1704-227583977478347/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:08 compute-1 sudo[133378]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:08 compute-1 ceph-mon[79643]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:08 compute-1 sudo[133454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdwocfbttdicwhubtkvrsuiiyvukcaqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.8262796-1704-227583977478347/AnsiballZ_systemd.py'
Nov 25 09:41:08 compute-1 sudo[133454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:08 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:08 compute-1 python3.9[133456]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:41:08 compute-1 systemd[1]: Reloading.
Nov 25 09:41:08 compute-1 systemd-rc-local-generator[133481]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:08 compute-1 systemd-sysv-generator[133484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:08 compute-1 sudo[133454]: pam_unix(sudo:session): session closed for user root
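Note: ansible-systemd with daemon_reload=True corresponds to a plain unit-file reload; the rc.local and SysV network generator warnings repeat on every reload here and are unrelated to the OVN units:

    sudo systemctl daemon-reload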
Nov 25 09:41:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:09 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdd140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:09 compute-1 sudo[133564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpfanxcbkhrmnlghccbdsrfznndzzbjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.8262796-1704-227583977478347/AnsiballZ_systemd.py'
Nov 25 09:41:09 compute-1 sudo[133564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:09 compute-1 python3.9[133566]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:41:09 compute-1 systemd[1]: Reloading.
Nov 25 09:41:09 compute-1 systemd-rc-local-generator[133594]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:09 compute-1 systemd-sysv-generator[133597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:09.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:09 compute-1 systemd[1]: Starting ovn_controller container...
Nov 25 09:41:09 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:41:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44582de3bcf54bfb9fc770950848ae39bc510af15e2dd5f9733072104d604f76/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:09 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1.
Nov 25 09:41:09 compute-1 podman[133608]: 2025-11-25 09:41:09.700970724 +0000 UTC m=+0.074999285 container init b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:41:09 compute-1 ovn_controller[133620]: + sudo -E kolla_set_configs
Nov 25 09:41:09 compute-1 podman[133608]: 2025-11-25 09:41:09.722485881 +0000 UTC m=+0.096514432 container start b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:41:09 compute-1 edpm-start-podman-container[133608]: ovn_controller
Nov 25 09:41:09 compute-1 systemd[1]: Created slice User Slice of UID 0.
Nov 25 09:41:09 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 09:41:09 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 09:41:09 compute-1 systemd[1]: Starting User Manager for UID 0...
Nov 25 09:41:09 compute-1 systemd[133649]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 25 09:41:09 compute-1 podman[133627]: 2025-11-25 09:41:09.783336535 +0000 UTC m=+0.053776270 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:41:09 compute-1 edpm-start-podman-container[133607]: Creating additional drop-in dependency for "ovn_controller" (b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1)
Nov 25 09:41:09 compute-1 systemd[1]: b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1-487e148e6e07829a.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 09:41:09 compute-1 systemd[1]: b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1-487e148e6e07829a.service: Failed with result 'exit-code'.
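Note: the failed transient unit b16bbaab...-487e148e6e07829a.service is podman's timer-driven healthcheck for the container, fired while the service is still starting (health_status=starting above), so its first run exits 1. It can be re-run by hand once ovn-controller is up, as a sketch:

    sudo podman healthcheck run ovn_controller; echo "rc=$?"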
Nov 25 09:41:09 compute-1 systemd[1]: Reloading.
Nov 25 09:41:09 compute-1 systemd[133649]: Queued start job for default target Main User Target.
Nov 25 09:41:09 compute-1 systemd-rc-local-generator[133696]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:09 compute-1 systemd[133649]: Created slice User Application Slice.
Nov 25 09:41:09 compute-1 systemd[133649]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 09:41:09 compute-1 systemd[133649]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:41:09 compute-1 systemd[133649]: Reached target Paths.
Nov 25 09:41:09 compute-1 systemd[133649]: Reached target Timers.
Nov 25 09:41:09 compute-1 systemd[133649]: Starting D-Bus User Message Bus Socket...
Nov 25 09:41:09 compute-1 systemd[133649]: Starting Create User's Volatile Files and Directories...
Nov 25 09:41:09 compute-1 systemd-sysv-generator[133704]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:09 compute-1 systemd[133649]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:41:09 compute-1 systemd[133649]: Reached target Sockets.
Nov 25 09:41:09 compute-1 systemd[133649]: Finished Create User's Volatile Files and Directories.
Nov 25 09:41:09 compute-1 systemd[133649]: Reached target Basic System.
Nov 25 09:41:09 compute-1 systemd[133649]: Reached target Main User Target.
Nov 25 09:41:09 compute-1 systemd[133649]: Startup finished in 105ms.
Nov 25 09:41:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:09 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:10 compute-1 systemd[1]: Started User Manager for UID 0.
Nov 25 09:41:10 compute-1 systemd[1]: Started ovn_controller container.
Nov 25 09:41:10 compute-1 systemd[1]: Started Session c1 of User root.
Nov 25 09:41:10 compute-1 sudo[133564]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:10 compute-1 ovn_controller[133620]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:41:10 compute-1 ovn_controller[133620]: INFO:__main__:Validating config file
Nov 25 09:41:10 compute-1 ovn_controller[133620]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:41:10 compute-1 ovn_controller[133620]: INFO:__main__:Writing out command to execute
Nov 25 09:41:10 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 09:41:10 compute-1 ovn_controller[133620]: ++ cat /run_command
Nov 25 09:41:10 compute-1 ovn_controller[133620]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 09:41:10 compute-1 ovn_controller[133620]: + ARGS=
Nov 25 09:41:10 compute-1 ovn_controller[133620]: + sudo kolla_copy_cacerts
Nov 25 09:41:10 compute-1 systemd[1]: Started Session c2 of User root.
Nov 25 09:41:10 compute-1 ovn_controller[133620]: + [[ ! -n '' ]]
Nov 25 09:41:10 compute-1 ovn_controller[133620]: + . kolla_extend_start
Nov 25 09:41:10 compute-1 ovn_controller[133620]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 09:41:10 compute-1 ovn_controller[133620]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 09:41:10 compute-1 ovn_controller[133620]: + umask 0022
Nov 25 09:41:10 compute-1 ovn_controller[133620]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
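Note: the trace above is the kolla entrypoint in full: kolla_set_configs copies files according to config.json, the generated command is written to /run_command, and exec replaces the shell so ovn-controller becomes the container's main process. The command can be confirmed from the host, as a sketch:

    sudo podman exec ovn_controller cat /run_command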
Nov 25 09:41:10 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1272] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1276] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1283] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1287] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1290] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:41:10 compute-1 kernel: br-int: entered promiscuous mode
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1440] manager: (ovn-2c2076-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 09:41:10 compute-1 ovn_controller[133620]: 2025-11-25T09:41:10Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
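Note: at this point ovn-controller has its southbound session up over SSL (ovsdbserver-sb.openstack.svc:6642) using the certificates mounted under /etc/pki/tls, plus its br-int.mgmt OpenFlow connections (main, pinctrl, statctrl). The southbound endpoint is conventionally kept in the local OVS database, readable as a sketch:

    sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-remote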
Nov 25 09:41:10 compute-1 systemd-udevd[133750]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:41:10 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1558] device (genev_sys_6081): carrier: link connected
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1561] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 09:41:10 compute-1 systemd-udevd[133752]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:41:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:10.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1879] manager: (ovn-a23dd6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 25 09:41:10 compute-1 NetworkManager[48856]: <info>  [1764063670.1902] manager: (ovn-f116e4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 25 09:41:10 compute-1 sudo[133880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivmmpnxvqowdnayfjzzszbiebhmsscpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063670.2332606-1788-177567593648507/AnsiballZ_command.py'
Nov 25 09:41:10 compute-1 sudo[133880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:10 compute-1 ceph-mon[79643]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:10 compute-1 python3.9[133882]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:41:10 compute-1 ovs-vsctl[133883]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 09:41:10 compute-1 sudo[133880]: pam_unix(sudo:session): session closed for user root
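Note: removing other_config:hw-offload reverts OVS to its default (no hardware-offload setting); ovs-vsctl remove is idempotent, so an absent key is not an error. The remaining map can be verified, as a sketch:

    sudo ovs-vsctl get Open_vSwitch . other_config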
Nov 25 09:41:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:10 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:10 compute-1 sudo[134033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-warsqfrejytphxdiiypfcuumykznklzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063670.7327619-1812-276881926298125/AnsiballZ_command.py'
Nov 25 09:41:10 compute-1 sudo[134033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:11 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58c8003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:11 compute-1 python3.9[134035]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:41:11 compute-1 ovs-vsctl[134037]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 09:41:11 compute-1 sudo[134033]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:11.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:11 compute-1 sudo[134189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmxqulnporyhinirafbnjelnzytmfdmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063671.5243592-1854-36885365682938/AnsiballZ_command.py'
Nov 25 09:41:11 compute-1 sudo[134189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:11 compute-1 python3.9[134191]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:41:11 compute-1 ovs-vsctl[134192]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 09:41:11 compute-1 sudo[134189]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:11 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdd140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:12.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:12 compute-1 sshd-session[122703]: Connection closed by 192.168.122.30 port 50182
Nov 25 09:41:12 compute-1 sshd-session[122700]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:41:12 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Nov 25 09:41:12 compute-1 systemd[1]: session-47.scope: Consumed 40.203s CPU time.
Nov 25 09:41:12 compute-1 systemd-logind[746]: Session 47 logged out. Waiting for processes to exit.
Nov 25 09:41:12 compute-1 systemd-logind[746]: Removed session 47.
Nov 25 09:41:12 compute-1 ceph-mon[79643]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:12 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc00c430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:13 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc00c430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:13.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:13 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58c8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:14.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:14 compute-1 ceph-mon[79643]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:14 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdd140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:15 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc00c430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:15 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc00c430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:16.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:16 compute-1 ceph-mon[79643]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:16 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58c8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:17 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55b05acdda60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:17 compute-1 sshd-session[134220]: Accepted publickey for zuul from 192.168.122.30 port 38276 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:41:17 compute-1 kernel: ganesha.nfsd[132006]: segfault at 50 ip 00007f597854732e sp 00007f593fffe210 error 4 in libntirpc.so.5.8[7f597852c000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 25 09:41:17 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:41:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[128714]: 25/11/2025 09:41:17 : epoch 6925799c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58cc00c430 fd 38 proxy ignored for local
Nov 25 09:41:17 compute-1 systemd-logind[746]: New session 49 of user zuul.
Nov 25 09:41:17 compute-1 systemd[1]: Started Session 49 of User zuul.
Nov 25 09:41:17 compute-1 systemd[1]: Started Process Core Dump (PID 134223/UID 0).
Nov 25 09:41:17 compute-1 sshd-session[134220]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:41:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:18.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:18 compute-1 ceph-mon[79643]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:41:18 compute-1 python3.9[134375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:41:19 compute-1 systemd-coredump[134224]: Process 128718 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007f597854732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:41:19 compute-1 systemd[1]: systemd-coredump@3-134223-0.service: Deactivated successfully.
Nov 25 09:41:19 compute-1 systemd[1]: systemd-coredump@3-134223-0.service: Consumed 1.079s CPU time.
Nov 25 09:41:19 compute-1 podman[134408]: 2025-11-25 09:41:19.190873645 +0000 UTC m=+0.020574244 container died 12bb688e435797a995d480c2cc7bff94fb222f571369354c2e6b8dccea18b617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:41:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-ac9b98b091317e271c042df67dbc23661330ca34effeb05258fd75842fb370aa-merged.mount: Deactivated successfully.
Nov 25 09:41:19 compute-1 podman[134408]: 2025-11-25 09:41:19.217186405 +0000 UTC m=+0.046886983 container remove 12bb688e435797a995d480c2cc7bff94fb222f571369354c2e6b8dccea18b617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:41:19 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:41:19 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:41:19 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.052s CPU time.
Nov 25 09:41:19 compute-1 sudo[134566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpjbwcykrcbxqiwpbqgofnkjbhthwgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063679.2201412-63-84024109576293/AnsiballZ_file.py'
Nov 25 09:41:19 compute-1 sudo[134566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:19.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:19 compute-1 sudo[134569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:19 compute-1 sudo[134569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:19 compute-1 sudo[134569]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:19 compute-1 python3.9[134568]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:19 compute-1 sudo[134566]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:19 compute-1 sudo[134743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plovzqqualcfjjdubibqtnwoorspamdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063679.8268938-63-264974786869417/AnsiballZ_file.py'
Nov 25 09:41:20 compute-1 sudo[134743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:20 compute-1 python3.9[134745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:20 compute-1 sudo[134743]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:20.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:20 compute-1 systemd[1]: Stopping User Manager for UID 0...
Nov 25 09:41:20 compute-1 systemd[133649]: Activating special unit Exit the Session...
Nov 25 09:41:20 compute-1 systemd[133649]: Stopped target Main User Target.
Nov 25 09:41:20 compute-1 systemd[133649]: Stopped target Basic System.
Nov 25 09:41:20 compute-1 systemd[133649]: Stopped target Paths.
Nov 25 09:41:20 compute-1 systemd[133649]: Stopped target Sockets.
Nov 25 09:41:20 compute-1 systemd[133649]: Stopped target Timers.
Nov 25 09:41:20 compute-1 systemd[133649]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 09:41:20 compute-1 systemd[133649]: Closed D-Bus User Message Bus Socket.
Nov 25 09:41:20 compute-1 systemd[133649]: Stopped Create User's Volatile Files and Directories.
Nov 25 09:41:20 compute-1 systemd[133649]: Removed slice User Application Slice.
Nov 25 09:41:20 compute-1 systemd[133649]: Reached target Shutdown.
Nov 25 09:41:20 compute-1 systemd[133649]: Finished Exit the Session.
Nov 25 09:41:20 compute-1 systemd[133649]: Reached target Exit the Session.
Nov 25 09:41:20 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 09:41:20 compute-1 systemd[1]: Stopped User Manager for UID 0.
Nov 25 09:41:20 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 09:41:20 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 09:41:20 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 09:41:20 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 09:41:20 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 09:41:20 compute-1 sudo[134896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvkcnisepxexbgmllqvfpewgzzkfyxhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063680.27106-63-274878677185979/AnsiballZ_file.py'
Nov 25 09:41:20 compute-1 sudo[134896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:20 compute-1 ceph-mon[79643]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:20 compute-1 python3.9[134898]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:20 compute-1 sudo[134896]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:20 compute-1 sudo[135048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtmobvrlymexkkfbeocpowdjslxkcpfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063680.7064846-63-50666095269112/AnsiballZ_file.py'
Nov 25 09:41:20 compute-1 sudo[135048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:21 compute-1 python3.9[135050]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:21 compute-1 sudo[135048]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:21 compute-1 sudo[135201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmwruhkqflvmewgjfdcseqvjeslpzued ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063681.1526027-63-109367155122023/AnsiballZ_file.py'
Nov 25 09:41:21 compute-1 sudo[135201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:21 compute-1 python3.9[135203]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:21 compute-1 sudo[135201]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:21.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:22 compute-1 python3.9[135353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:41:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:22.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:22 compute-1 sudo[135503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqtsnmzegcwtqmznxfetpplxvbaxflvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063682.2082262-195-198852899096872/AnsiballZ_seboolean.py'
Nov 25 09:41:22 compute-1 sudo[135503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:22 compute-1 ceph-mon[79643]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:41:22 compute-1 python3.9[135505]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 09:41:23 compute-1 sudo[135503]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:23.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:23 compute-1 python3.9[135656]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:24 compute-1 python3.9[135777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063683.3718848-219-63597891579000/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:24 compute-1 ceph-mon[79643]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094124 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:41:24 compute-1 python3.9[135927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:25 compute-1 python3.9[136048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063684.4612563-264-151410904846052/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:25.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:25 compute-1 sudo[136199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbqjqekaygdihtrqjctjvsnvsrgenusy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063685.4455898-315-223269491669295/AnsiballZ_setup.py'
Nov 25 09:41:25 compute-1 sudo[136199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:25 compute-1 python3.9[136201]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:41:26 compute-1 sudo[136199]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:26.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:26 compute-1 sudo[136283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxzxvuhambnshawbixotieiwzapdtyyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063685.4455898-315-223269491669295/AnsiballZ_dnf.py'
Nov 25 09:41:26 compute-1 sudo[136283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:26 compute-1 python3.9[136285]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:41:26 compute-1 ceph-mon[79643]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:27 compute-1 sudo[136283]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:27.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:28.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:28 compute-1 sudo[136438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmsnctbdqnuwachoeooqabpiftpbetux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063687.7187817-351-22253546049922/AnsiballZ_systemd.py'
Nov 25 09:41:28 compute-1 sudo[136438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:28 compute-1 python3.9[136440]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:41:28 compute-1 sudo[136438]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:28 compute-1 ceph-mon[79643]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:41:28 compute-1 python3.9[136593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:29 compute-1 python3.9[136714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063688.6346865-375-58213291172197/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:29 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 4.
Nov 25 09:41:29 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:41:29 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.052s CPU time.
Nov 25 09:41:29 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:41:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:29.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:29 compute-1 podman[136903]: 2025-11-25 09:41:29.624810462 +0000 UTC m=+0.027242976 container create c92c0080df66c8a94af267e8902ece696b7263f84a0a745db5bcfad4e78adbbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:41:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd25132732b0c2e8e7734be381f8dc6ab3704d3a695953ef3643eeec1ffbfda5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd25132732b0c2e8e7734be381f8dc6ab3704d3a695953ef3643eeec1ffbfda5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd25132732b0c2e8e7734be381f8dc6ab3704d3a695953ef3643eeec1ffbfda5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd25132732b0c2e8e7734be381f8dc6ab3704d3a695953ef3643eeec1ffbfda5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:29 compute-1 podman[136903]: 2025-11-25 09:41:29.655695431 +0000 UTC m=+0.058127964 container init c92c0080df66c8a94af267e8902ece696b7263f84a0a745db5bcfad4e78adbbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:41:29 compute-1 podman[136903]: 2025-11-25 09:41:29.660298658 +0000 UTC m=+0.062731171 container start c92c0080df66c8a94af267e8902ece696b7263f84a0a745db5bcfad4e78adbbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:41:29 compute-1 bash[136903]: c92c0080df66c8a94af267e8902ece696b7263f84a0a745db5bcfad4e78adbbc
Nov 25 09:41:29 compute-1 podman[136903]: 2025-11-25 09:41:29.612998788 +0000 UTC m=+0.015431321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:41:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:41:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:41:29 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:41:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:41:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:41:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:41:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:41:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:41:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:41:29 compute-1 python3.9[136901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:30 compute-1 python3.9[137077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063689.4091203-375-251243606929496/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:30.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:30 compute-1 ceph-mon[79643]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:31 compute-1 python3.9[137228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:31.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:31 compute-1 python3.9[137349]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063691.0760555-507-120768261345468/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:32 compute-1 python3.9[137499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:32 compute-1 python3.9[137620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063691.8715065-507-17864202042620/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:32 compute-1 ceph-mon[79643]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:33 compute-1 python3.9[137770]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:41:33 compute-1 sudo[137923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkzmodjbbecagpcopdedwahepetqlaeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063693.2638226-621-261697690112763/AnsiballZ_file.py'
Nov 25 09:41:33 compute-1 sudo[137923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:33 compute-1 python3.9[137925]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:33 compute-1 sudo[137923]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:33.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:33 compute-1 sudo[138075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlznesapkqhpsvslashsdaclkzmvjhfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063693.7483566-645-108115820101580/AnsiballZ_stat.py'
Nov 25 09:41:33 compute-1 sudo[138075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:34 compute-1 python3.9[138077]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:34 compute-1 sudo[138075]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:34.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:34 compute-1 sudo[138153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqgzphfjjjyjrpubflrlsirekwskhmdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063693.7483566-645-108115820101580/AnsiballZ_file.py'
Nov 25 09:41:34 compute-1 sudo[138153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:34 compute-1 python3.9[138155]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:34 compute-1 sudo[138153]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:34 compute-1 ceph-mon[79643]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:34 compute-1 sudo[138305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioujrehgvycnzvmavixcepqrrvqkzrio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063694.5476124-645-37166339914799/AnsiballZ_stat.py'
Nov 25 09:41:34 compute-1 sudo[138305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:34 compute-1 python3.9[138307]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:34 compute-1 sudo[138305]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:35 compute-1 sudo[138383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igrebqbomgjtsefxwsoxolnfxvxhbdsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063694.5476124-645-37166339914799/AnsiballZ_file.py'
Nov 25 09:41:35 compute-1 sudo[138383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:35 compute-1 python3.9[138385]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:35 compute-1 sudo[138383]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:35 compute-1 sudo[138536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruwbdsvnzccyzipbhuwebjzntxntgndc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063695.324579-714-178549593608883/AnsiballZ_file.py'
Nov 25 09:41:35 compute-1 sudo[138536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:35.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:35 compute-1 python3.9[138538]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:35 compute-1 sudo[138536]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:41:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:41:35 compute-1 sudo[138688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyfatzoddwwdfqwzpluribqzwzggwvgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063695.7984898-738-278302523593831/AnsiballZ_stat.py'
Nov 25 09:41:35 compute-1 sudo[138688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:36 compute-1 python3.9[138690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:36 compute-1 sudo[138688]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:36.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:36 compute-1 sudo[138766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mezhdvmgbqvsqejusevolbzsfgkfkvbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063695.7984898-738-278302523593831/AnsiballZ_file.py'
Nov 25 09:41:36 compute-1 sudo[138766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:36 compute-1 python3.9[138768]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:36 compute-1 sudo[138766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:36 compute-1 ceph-mon[79643]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:36 compute-1 sudo[138918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndgrvnkjzolwzxzeizwqdgblyeuspile ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063696.6151152-774-265400720680321/AnsiballZ_stat.py'
Nov 25 09:41:36 compute-1 sudo[138918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:36 compute-1 python3.9[138920]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:36 compute-1 sudo[138918]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:37 compute-1 sudo[138996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olpiugbdgcvkvbmcrhlhuxcwnnudxdpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063696.6151152-774-265400720680321/AnsiballZ_file.py'
Nov 25 09:41:37 compute-1 sudo[138996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:37 compute-1 python3.9[138998]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:37 compute-1 sudo[138996]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:37.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:37 compute-1 sudo[139149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlqzwijzsyekschwclywyzkopfctsokm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063697.4393888-810-94165603494390/AnsiballZ_systemd.py'
Nov 25 09:41:37 compute-1 sudo[139149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:37 compute-1 python3.9[139151]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:41:37 compute-1 systemd[1]: Reloading.
Nov 25 09:41:37 compute-1 systemd-sysv-generator[139175]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:37 compute-1 systemd-rc-local-generator[139172]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:38 compute-1 sudo[139149]: pam_unix(sudo:session): session closed for user root
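
The ansible.builtin.systemd call at 09:41:37 (daemon_reload=True, enabled=True, state=started) accounts for the "Reloading." line and the generator chatter that follows: a daemon-reload re-runs systemd-sysv-generator and systemd-rc-local-generator before the unit is enabled and started. A rough shell-level equivalent, sketched in Python — this mirrors the module's observable effect, not its actual implementation, and must run as root (hence the surrounding sudo session).

    # Sketch of the observable effect of the logged ansible.builtin.systemd
    # invocation; not the module's real code path.
    import subprocess

    def enable_and_start(unit: str) -> None:
        subprocess.run(["systemctl", "daemon-reload"], check=True)  # emits "Reloading."
        subprocess.run(["systemctl", "enable", unit], check=True)   # honors the 91-* preset
        subprocess.run(["systemctl", "start", unit], check=True)

    enable_and_start("edpm-container-shutdown")
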
Nov 25 09:41:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:38.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:38 compute-1 sudo[139338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laiqpgozbfepmskgbdimndfktdhomegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063698.3365817-834-260209336698263/AnsiballZ_stat.py'
Nov 25 09:41:38 compute-1 sudo[139338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:38 compute-1 ceph-mon[79643]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:38 compute-1 python3.9[139340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:38 compute-1 sudo[139338]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:38 compute-1 sudo[139416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwusnncpmcamnmwukniywnelqzcuqxeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063698.3365817-834-260209336698263/AnsiballZ_file.py'
Nov 25 09:41:38 compute-1 sudo[139416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:39 compute-1 python3.9[139418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:39 compute-1 sudo[139416]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:39 compute-1 sudo[139569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxcymxqcmorcupomuartiyokvnvlgtjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063699.236181-870-162109880618593/AnsiballZ_stat.py'
Nov 25 09:41:39 compute-1 sudo[139569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:39 compute-1 python3.9[139571]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:39 compute-1 sudo[139569]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:39.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:39 compute-1 sudo[139602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:39 compute-1 sudo[139602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:39 compute-1 sudo[139602]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:39 compute-1 sudo[139672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gywhugbskgpwokevdafccscseaoqhugr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063699.236181-870-162109880618593/AnsiballZ_file.py'
Nov 25 09:41:39 compute-1 sudo[139672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:39 compute-1 python3.9[139674]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:39 compute-1 sudo[139672]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:40.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:40 compute-1 sudo[139833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoggfkwuxgudpwvgiqfepzacmnudbrqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063700.0375218-906-243252521916373/AnsiballZ_systemd.py'
Nov 25 09:41:40 compute-1 sudo[139833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:40 compute-1 ovn_controller[133620]: 2025-11-25T09:41:40Z|00025|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Nov 25 09:41:40 compute-1 ovn_controller[133620]: 2025-11-25T09:41:40Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 25 09:41:40 compute-1 podman[139798]: 2025-11-25 09:41:40.269011841 +0000 UTC m=+0.065059850 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 09:41:40 compute-1 python3.9[139842]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:41:40 compute-1 systemd[1]: Reloading.
Nov 25 09:41:40 compute-1 systemd-rc-local-generator[139877]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:40 compute-1 systemd-sysv-generator[139882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:40 compute-1 ceph-mon[79643]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:40 compute-1 systemd[1]: Starting Create netns directory...
Nov 25 09:41:40 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:41:40 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:41:40 compute-1 systemd[1]: Finished Create netns directory.
Nov 25 09:41:40 compute-1 sudo[139833]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:41 compute-1 sudo[140043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxfroypoignyrufutsiouvsbhmqqvcyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063701.0765667-936-92891122204772/AnsiballZ_file.py'
Nov 25 09:41:41 compute-1 sudo[140043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:41 compute-1 python3.9[140045]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:41 compute-1 sudo[140043]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:41.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:41:41 compute-1 sudo[140195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmjipkcvvxyrensikwhchuhvjhqenle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063701.581036-960-90771873916344/AnsiballZ_stat.py'
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:41:41 compute-1 sudo[140195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:41 compute-1 python3.9[140211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:41 compute-1 sudo[140195]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:42 compute-1 sudo[140334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfkhznpfyxbfqgjzzlrlnaddvdwxozet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063701.581036-960-90771873916344/AnsiballZ_copy.py'
Nov 25 09:41:42 compute-1 sudo[140334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:42.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:42 compute-1 python3.9[140336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063701.581036-960-90771873916344/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:42 compute-1 sudo[140334]: pam_unix(sudo:session): session closed for user root
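
The stat/copy pair around the ovn_metadata_agent healthcheck is Ansible's usual checksum-driven idempotence: ansible.legacy.stat fetches a SHA-1 of the destination (get_checksum=True, checksum_algorithm=sha1) and ansible.legacy.copy only rewrites the file when that digest differs from the source's (here 898a5a1fcd473cf731177fc866e3bd7ebf20a131). A minimal sketch of that comparison; the paths are illustrative.

    # Sketch of the checksum comparison behind the stat/copy pair above.
    import hashlib
    import pathlib

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        h.update(pathlib.Path(path).read_bytes())
        return h.hexdigest()

    def needs_copy(dest: str, expected_checksum: str) -> bool:
        p = pathlib.Path(dest)
        # Missing file or mismatched digest -> copy; identical digest -> no-op.
        return not p.exists() or sha1_of(dest) != expected_checksum
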
Nov 25 09:41:42 compute-1 ceph-mon[79643]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:41:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:42 compute-1 sudo[140486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjoudgndfqrlzbytirtagmqrvndrnpvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063702.6955998-1011-147231098535617/AnsiballZ_file.py'
Nov 25 09:41:42 compute-1 sudo[140486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:43 compute-1 python3.9[140488]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:43 compute-1 sudo[140486]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:43 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40020a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:43 compute-1 sudo[140639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhimgsmrfggyoganwybakhyglhycqge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063703.2298534-1035-77530627017214/AnsiballZ_stat.py'
Nov 25 09:41:43 compute-1 sudo[140639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:43 compute-1 python3.9[140641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:43 compute-1 sudo[140639]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:43.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:43 compute-1 sudo[140762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmxnuguwhgvmijpqwewuyenvdfhmzsdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063703.2298534-1035-77530627017214/AnsiballZ_copy.py'
Nov 25 09:41:43 compute-1 sudo[140762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:43 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:43 compute-1 python3.9[140764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063703.2298534-1035-77530627017214/.source.json _original_basename=.raz3p6yb follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:43 compute-1 sudo[140762]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:44 compute-1 sudo[140914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkhvstyvcnrgidtbcpzzvvhzlsrbtywd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063704.1297715-1080-136065603373406/AnsiballZ_file.py'
Nov 25 09:41:44 compute-1 sudo[140914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:44 compute-1 python3.9[140916]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:44 compute-1 sudo[140914]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094144 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:41:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:44 compute-1 ceph-mon[79643]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:41:44 compute-1 sudo[141066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsgqzkqufazvcfimjbviaoansduvvxpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063704.6426182-1104-113019313725740/AnsiballZ_stat.py'
Nov 25 09:41:44 compute-1 sudo[141066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:44 compute-1 sudo[141066]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:45 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:45 compute-1 sudo[141190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smxjrftlkggntmmfmowhylcruwoossry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063704.6426182-1104-113019313725740/AnsiballZ_copy.py'
Nov 25 09:41:45 compute-1 sudo[141190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:45 compute-1 sudo[141190]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:45.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:45 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4002ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:46 compute-1 sudo[141342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqdtovgyobjulxnifkxmbzysskyflmps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063705.7323108-1155-89408861756834/AnsiballZ_container_config_data.py'
Nov 25 09:41:46 compute-1 sudo[141342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:46 compute-1 python3.9[141344]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 09:41:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:46 compute-1 sudo[141342]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:46 compute-1 ceph-mon[79643]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:41:46 compute-1 sudo[141494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvxcwembvdhoxvwugcbqtvoqwvshomwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063706.4021704-1182-63432747930941/AnsiballZ_container_config_hash.py'
Nov 25 09:41:46 compute-1 sudo[141494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:46 compute-1 python3.9[141496]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:41:46 compute-1 sudo[141494]: pam_unix(sudo:session): session closed for user root
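
The container_config_hash run above (config_vol_prefix=/var/lib/config-data) produces the digest that reappears as EDPM_CONFIG_HASH (0823bd3e096c…) in the ovn_metadata_agent environment at 09:41:57, presumably so that a config change forces a container recreate. A plausible sketch of such a digest, hashing config files in a stable order; the module's exact inputs and algorithm are assumptions here.

    # Plausible sketch of a config-directory digest like EDPM_CONFIG_HASH;
    # the real module's file selection and algorithm are assumptions.
    import hashlib
    import pathlib

    def config_hash(config_dir: str) -> str:
        h = hashlib.sha256()
        for f in sorted(pathlib.Path(config_dir).rglob("*")):
            if f.is_file():
                h.update(f.name.encode())   # stable order + names + contents
                h.update(f.read_bytes())
        return h.hexdigest()
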
Nov 25 09:41:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:47 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:47 compute-1 sudo[141647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrxijivdwsrdwuexzvcdsnyxbfdhugpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063707.1051414-1209-177939542552706/AnsiballZ_podman_container_info.py'
Nov 25 09:41:47 compute-1 sudo[141647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:47 compute-1 python3.9[141649]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 09:41:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:47.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:47 compute-1 sudo[141647]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:47 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:48 compute-1 ceph-mon[79643]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:48 compute-1 sudo[141819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgjqgpivsnuvpjmoytqkzfcdjlitbdpo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063708.528169-1248-229993405966893/AnsiballZ_edpm_container_manage.py'
Nov 25 09:41:48 compute-1 sudo[141819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:49 compute-1 python3[141821]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:41:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:49.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:50.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:50 compute-1 ceph-mon[79643]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:51 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:41:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:41:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:51 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:52.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec003660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:52 compute-1 ceph-mon[79643]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:53 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:53.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:53 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4004b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4004b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:54 compute-1 ceph-mon[79643]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec003660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:55.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:55 compute-1 ceph-mon[79643]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:56.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:56 compute-1 sudo[141892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:56 compute-1 sudo[141892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:56 compute-1 sudo[141892]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:56 compute-1 sudo[141917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 25 09:41:56 compute-1 sudo[141917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4004b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:57 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4004b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:57 compute-1 podman[141833]: 2025-11-25 09:41:57.510428606 +0000 UTC m=+8.310594430 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:41:57 compute-1 sudo[141917]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:57 compute-1 sudo[142003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:57 compute-1 sudo[142003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:57 compute-1 sudo[142003]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:57 compute-1 podman[142006]: 2025-11-25 09:41:57.616497417 +0000 UTC m=+0.037952330 container create 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:41:57 compute-1 podman[142006]: 2025-11-25 09:41:57.594818341 +0000 UTC m=+0.016273276 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:41:57 compute-1 python3[141821]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
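
The PODMAN-CONTAINER-DEBUG line shows edpm_container_manage rendering its config_data dict into a podman create invocation: environment becomes --env, net/pid become --network/--pid, and each volumes entry becomes a --volume flag. A reduced sketch of that mapping, covering only the keys visible in this log; the real command also carries the conmon pidfile, labels, healthcheck, and log-driver flags seen above.

    # Reduced sketch of the config_data -> podman create mapping visible in the
    # debug line above; flag coverage is deliberately partial.
    def render_podman_create(name: str, cfg: dict) -> list[str]:
        cmd = ["podman", "create", "--name", name]
        for k, v in cfg.get("environment", {}).items():
            cmd += ["--env", f"{k}={v}"]
        if cfg.get("net"):
            cmd += ["--network", cfg["net"]]
        if cfg.get("pid"):
            cmd += ["--pid", cfg["pid"]]
        if cfg.get("privileged"):
            cmd += ["--privileged=True"]     # matches the logged flag form
        if cfg.get("user"):
            cmd += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            cmd += ["--volume", vol]
        cmd.append(cfg["image"])
        return cmd
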
Nov 25 09:41:57 compute-1 sudo[142037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:41:57 compute-1 sudo[142037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:57.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:57 compute-1 sudo[141819]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:57 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec003660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:58 compute-1 sudo[142037]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094158 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
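
The haproxy wrapper's state flips ("Layer4 check passed" at 09:41:44, "Connection refused" here) are plain TCP connect probes against the ganesha NFS backends. A minimal sketch of a Layer4 check, assuming a host/port pair; the log names only backend/nfs.cephfs.*, not the addresses being probed.

    # Minimal sketch of a Layer4 (TCP connect) health check like the ones
    # haproxy reports above; host and port are assumptions.
    import socket

    def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True    # haproxy would mark the server UP
        except OSError:
            return False       # e.g. "Connection refused" -> server DOWN
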
Nov 25 09:41:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:58.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:58 compute-1 ceph-mon[79643]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:58 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:59 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:41:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:59.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:59 compute-1 sudo[142135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:59 compute-1 sudo[142135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:59 compute-1 sudo[142135]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:41:59 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:42:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:00.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:42:00 compute-1 sudo[142285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzemrwornzwwxsmwfwbigtrttpyumowa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063720.1656065-1272-50972027042489/AnsiballZ_stat.py'
Nov 25 09:42:00 compute-1 sudo[142285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:00 compute-1 python3.9[142287]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:42:00 compute-1 sudo[142285]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:00 compute-1 ceph-mon[79643]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:42:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:42:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:42:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:42:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:42:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
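This burst of mon audit lines is the cephadm mgr's periodic poll dispatched to the monitor. The same queries can be issued from the CLI; a sketch assuming an admin keyring is available on the host:

```python
import json
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command; assumes an admin keyring on this host."""
    return subprocess.run(("ceph", *args), check=True,
                          capture_output=True, text=True).stdout

# The same queries the mgr dispatches to the mon in the audit lines above.
print(ceph("config", "generate-minimal-conf"))
destroyed = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))
blocklist = json.loads(ceph("osd", "blocklist", "ls", "--format", "json"))
print("destroyed OSDs:", [n["id"] for n in destroyed.get("nodes", [])
                          if n.get("type") == "osd"])
print("blocklist:", blocklist)
```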
Nov 25 09:42:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:00 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:00 compute-1 sudo[142439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmhfaqrtlqhhovftoubowdhmdjmtjjya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063720.7890503-1299-259168220814891/AnsiballZ_file.py'
Nov 25 09:42:00 compute-1 sudo[142439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:01 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:01 compute-1 python3.9[142441]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:01 compute-1 sudo[142439]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:01 compute-1 sudo[142516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auqzzjfcafscpdlzerhdtefqfoxifake ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063720.7890503-1299-259168220814891/AnsiballZ_stat.py'
Nov 25 09:42:01 compute-1 sudo[142516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:01 compute-1 python3.9[142518]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:42:01 compute-1 sudo[142516]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:01.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:01 compute-1 sudo[142667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjpczjjqtgistifzceecfdxcpivvjlcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063721.4894776-1299-165369019657411/AnsiballZ_copy.py'
Nov 25 09:42:01 compute-1 sudo[142667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:01 compute-1 python3.9[142669]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063721.4894776-1299-165369019657411/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:01 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:01 compute-1 sudo[142667]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:02 compute-1 sudo[142743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrjchgsvuczbjqrrvllcxhqkznpsxmmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063721.4894776-1299-165369019657411/AnsiballZ_systemd.py'
Nov 25 09:42:02 compute-1 sudo[142743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:02.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:02 compute-1 python3.9[142745]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:42:02 compute-1 systemd[1]: Reloading.
Nov 25 09:42:02 compute-1 systemd-sysv-generator[142772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:02 compute-1 systemd-rc-local-generator[142768]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:02 compute-1 sudo[142743]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:02 compute-1 ceph-mon[79643]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:42:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:02 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:02 compute-1 sudo[142854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwqmgcgrywcjjoablznrqjmxanexgnti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063721.4894776-1299-165369019657411/AnsiballZ_systemd.py'
Nov 25 09:42:02 compute-1 sudo[142854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:03 compute-1 python3.9[142856]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:03 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:03 compute-1 systemd[1]: Reloading.
Nov 25 09:42:03 compute-1 systemd-rc-local-generator[142905]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:03 compute-1 systemd-sysv-generator[142909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
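Taken together, the tasks above are the usual unit-deployment sequence: ansible-copy installs edpm_ovn_metadata_agent.service, daemon_reload=True makes systemd re-run its generators (hence the repeated sysv/rc-local notices), and the second ansible-systemd call enables and restarts the unit. Roughly equivalent systemctl calls, sketched in Python:

```python
import subprocess

def systemctl(*args: str) -> None:
    subprocess.run(("systemctl", *args), check=True)

# What the two ansible-systemd invocations above boil down to:
systemctl("daemon-reload")                                 # daemon_reload=True
systemctl("enable", "edpm_ovn_metadata_agent.service")     # enabled=True
systemctl("restart", "edpm_ovn_metadata_agent.service")    # state=restarted
```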
Nov 25 09:42:03 compute-1 sudo[142859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:42:03 compute-1 sudo[142859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:03 compute-1 sudo[142859]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:03 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 09:42:03 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:42:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e48a104c6ba2a5bae61f5b55baa76487af4fff8c388b56c5db0c8a95e780945/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e48a104c6ba2a5bae61f5b55baa76487af4fff8c388b56c5db0c8a95e780945/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:03 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069.
Nov 25 09:42:03 compute-1 podman[142923]: 2025-11-25 09:42:03.412460389 +0000 UTC m=+0.079857889 container init 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + sudo -E kolla_set_configs
Nov 25 09:42:03 compute-1 podman[142923]: 2025-11-25 09:42:03.430839957 +0000 UTC m=+0.098237456 container start 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:42:03 compute-1 edpm-start-podman-container[142923]: ovn_metadata_agent
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Validating config file
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Copying service configuration files
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Writing out command to execute
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: ++ cat /run_command
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + CMD=neutron-ovn-metadata-agent
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + ARGS=
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + sudo kolla_copy_cacerts
Nov 25 09:42:03 compute-1 edpm-start-podman-container[142922]: Creating additional drop-in dependency for "ovn_metadata_agent" (8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069)
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + [[ ! -n '' ]]
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + . kolla_extend_start
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + umask 0022
Nov 25 09:42:03 compute-1 ovn_metadata_agent[142935]: + exec neutron-ovn-metadata-agent
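The trace above is kolla's standard entrypoint: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy, writes the service command to /run_command, and the wrapper execs it. A simplified re-implementation of that pattern; the config_files key names follow kolla's documented format and are assumed here, since the log does not show the JSON itself:

```python
import json
import os
import shutil

# Simplified re-implementation of the copy-then-exec pattern traced above;
# the real logic lives in kolla's kolla_set_configs / kolla_start scripts.
CONFIG = "/var/lib/kolla/config_files/config.json"

with open(CONFIG) as f:
    cfg = json.load(f)

for item in cfg.get("config_files", []):      # e.g. 01-rootwrap.conf above
    shutil.copy(item["source"], item["dest"])
    os.chmod(item["dest"], int(item.get("perm", "0644"), 8))

with open("/run_command") as f:               # written by kolla_set_configs
    cmd = f.read().strip()                    # -> 'neutron-ovn-metadata-agent'

os.umask(0o022)
os.execvp(cmd, [cmd])                         # replaces the shell, as traced
```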
Nov 25 09:42:03 compute-1 systemd[1]: Reloading.
Nov 25 09:42:03 compute-1 podman[142942]: 2025-11-25 09:42:03.514062892 +0000 UTC m=+0.076017985 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:42:03 compute-1 systemd-rc-local-generator[143003]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:03 compute-1 systemd-sysv-generator[143007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:03.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:03 compute-1 systemd[1]: Started ovn_metadata_agent container.
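The config_data blob in the container-init event records how edpm_ansible launched this container. An abridged reconstruction as a direct podman invocation; the flag spellings are standard podman options chosen here, not copied from the deployment, and most volume mounts are elided:

```python
import subprocess

# Abridged reconstruction from the config_data in the container-init event;
# flag spellings are standard podman options, not taken from the deployment.
image = ("quay.io/podified-antelope-centos9/"
         "openstack-neutron-metadata-agent-ovn@sha256:"
         "a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620")

argv = [
    "podman", "run", "--name", "ovn_metadata_agent",
    "--network", "host", "--pid", "host", "--cgroupns", "host",
    "--privileged", "--user", "root", "--restart", "always",
    "--env", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
    "--volume", "/run/openvswitch:/run/openvswitch:z",
    "--volume", ("/var/lib/kolla/config_files/ovn_metadata_agent.json:"
                 "/var/lib/kolla/config_files/config.json:ro"),
    "--volume", "/var/lib/neutron:/var/lib/neutron:shared,z",
    # ... remaining volumes elided; see the config_data above ...
    image,
]
subprocess.run(argv, check=True)
```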
Nov 25 09:42:03 compute-1 sudo[142854]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:03 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec005860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:04 compute-1 ceph-mon[79643]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:04.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:04 compute-1 sshd-session[134225]: Connection closed by 192.168.122.30 port 38276
Nov 25 09:42:04 compute-1 sshd-session[134220]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:42:04 compute-1 systemd-logind[746]: Session 49 logged out. Waiting for processes to exit.
Nov 25 09:42:04 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Nov 25 09:42:04 compute-1 systemd[1]: session-49.scope: Consumed 39.517s CPU time.
Nov 25 09:42:04 compute-1 systemd-logind[746]: Removed session 49.
Nov 25 09:42:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:04 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec005860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.936 142940 INFO neutron.common.config [-] Logging enabled!
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.936 142940 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.936 142940 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.937 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.937 142940 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.937 142940 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.937 142940 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.937 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.938 142940 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.938 142940 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.938 142940 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.938 142940 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.938 142940 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.938 142940 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.938 142940 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.939 142940 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.939 142940 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.939 142940 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.939 142940 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.939 142940 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.939 142940 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.939 142940 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.940 142940 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.940 142940 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.940 142940 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.940 142940 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.940 142940 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.940 142940 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.940 142940 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.941 142940 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.941 142940 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.941 142940 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.941 142940 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.941 142940 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.941 142940 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.941 142940 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.942 142940 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.942 142940 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.942 142940 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.942 142940 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.942 142940 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.942 142940 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.943 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.943 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.943 142940 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.943 142940 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.943 142940 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.943 142940 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.943 142940 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.943 142940 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.944 142940 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.944 142940 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.944 142940 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.944 142940 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.944 142940 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.944 142940 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.944 142940 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.945 142940 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.945 142940 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.945 142940 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.945 142940 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.945 142940 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.945 142940 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.945 142940 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.946 142940 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.946 142940 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.946 142940 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.946 142940 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.946 142940 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.946 142940 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.947 142940 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.947 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.947 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.947 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.947 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.947 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.947 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.948 142940 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.948 142940 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.948 142940 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.948 142940 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.948 142940 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.948 142940 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.948 142940 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.949 142940 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.949 142940 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.949 142940 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.949 142940 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.949 142940 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.949 142940 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.949 142940 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.950 142940 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.951 142940 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.951 142940 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.951 142940 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.951 142940 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.951 142940 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.951 142940 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.951 142940 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.952 142940 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.952 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.952 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.952 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.952 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.952 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.952 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.952 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.953 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.953 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.953 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.953 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.953 142940 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.953 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.953 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.954 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.954 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.954 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.954 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.954 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.954 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.954 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.954 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.955 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.955 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.955 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.955 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.955 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.955 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.955 142940 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.956 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.956 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.956 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.956 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.956 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.956 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.956 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.956 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.957 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.957 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.957 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.957 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.957 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.957 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.957 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.958 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.958 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.958 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.958 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.958 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.958 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.958 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.958 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.959 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.959 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.959 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.959 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.959 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.959 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.959 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.960 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.960 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.960 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.960 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.960 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.960 142940 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.960 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.961 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.961 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.961 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.961 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.961 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.961 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.961 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.962 142940 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.962 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.962 142940 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.962 142940 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.962 142940 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.962 142940 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.963 142940 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.963 142940 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.963 142940 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.963 142940 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.963 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.963 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.963 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.964 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.964 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.964 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.964 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.964 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.964 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.964 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.965 142940 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.965 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.965 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.965 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.965 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.965 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.965 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.966 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.966 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.966 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.966 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.966 142940 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.966 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.966 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.966 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.967 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.967 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.967 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.967 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.967 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.967 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.967 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.968 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.968 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.968 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.968 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.968 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.968 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.968 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.969 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.969 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.969 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.969 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.969 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.969 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.969 142940 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.970 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.970 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.970 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.970 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.970 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.970 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.970 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.971 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.971 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.971 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.971 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.971 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.971 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.972 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.972 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.972 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.972 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.972 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.972 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.972 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.973 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.973 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.973 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.973 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.973 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.973 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.973 142940 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.974 142940 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.974 142940 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.974 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.974 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.974 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.974 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.974 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.975 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.975 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.975 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.975 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.975 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.975 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.975 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.976 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.976 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.976 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.976 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.976 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.976 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.976 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.977 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.977 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.977 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.977 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.977 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.977 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.977 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.978 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.978 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.978 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.978 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.978 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.978 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.978 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.979 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.979 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.979 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.979 142940 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.979 142940 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.987 142940 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.987 142940 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.987 142940 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.987 142940 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 25 09:42:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:04.988 142940 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.000 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name ad0cdb86-b3c6-44c6-a890-1db2efa57d2b (UUID: ad0cdb86-b3c6-44c6-a890-1db2efa57d2b) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.016 142940 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.017 142940 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.017 142940 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.017 142940 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.019 142940 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.024 142940 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.028 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'ad0cdb86-b3c6-44c6-a890-1db2efa57d2b'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], external_ids={}, name=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, nb_cfg_timestamp=1764063678152, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.029 142940 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fcf55707f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.029 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.030 142940 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.030 142940 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.030 142940 INFO oslo_service.service [-] Starting 1 workers
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.034 142940 DEBUG oslo_service.service [-] Started child 143041 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.036 143041 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-946396'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.036 142940 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmps0zlpjyr/privsep.sock']
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.053 143041 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.053 143041 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.053 143041 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.055 143041 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.061 143041 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.066 143041 INFO eventlet.wsgi.server [-] (143041) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 25 09:42:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:05 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:05 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.562 142940 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.563 142940 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps0zlpjyr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.488 143047 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.492 143047 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.494 143047 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.494 143047 INFO oslo.privsep.daemon [-] privsep daemon running as pid 143047
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.565 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d6c30d-0727-42bd-a4d0-0173ce75adbc]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:42:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:05.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:05 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.956 143047 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.957 143047 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:42:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:05.957 143047 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:42:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:06.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:06 compute-1 ceph-mon[79643]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.382 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[0716274a-3e8f-42e2-a601-fc6209578d0e]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.384 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, column=external_ids, values=({'neutron:ovn-metadata-id': '24a0e59c-667a-5aa7-8121-4d4e6c879562'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.391 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.397 142940 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.398 142940 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.399 142940 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.400 142940 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.400 142940 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.400 142940 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.400 142940 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.400 142940 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.400 142940 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.400 142940 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.400 142940 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.401 142940 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.402 142940 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.402 142940 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.402 142940 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.402 142940 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.402 142940 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.402 142940 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.402 142940 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.402 142940 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.403 142940 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.404 142940 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.405 142940 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.406 142940 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.407 142940 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.408 142940 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.409 142940 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.409 142940 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.409 142940 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.409 142940 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.409 142940 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.409 142940 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.409 142940 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.409 142940 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.410 142940 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.411 142940 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.412 142940 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.413 142940 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.414 142940 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.415 142940 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.416 142940 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.417 142940 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.418 142940 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.419 142940 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.420 142940 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.421 142940 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.422 142940 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.423 142940 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.424 142940 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.425 142940 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.426 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.427 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.428 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.429 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.429 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.429 142940 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.429 142940 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.429 142940 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.429 142940 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.429 142940 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:42:06.429 142940 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 09:42:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:06 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:07 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:07 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:08 compute-1 ceph-mon[79643]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:08 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:09 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:09 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:10 compute-1 sshd-session[143054]: Accepted publickey for zuul from 192.168.122.30 port 33130 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:42:10 compute-1 systemd-logind[746]: New session 50 of user zuul.
Nov 25 09:42:10 compute-1 systemd[1]: Started Session 50 of User zuul.
Nov 25 09:42:10 compute-1 sshd-session[143054]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:42:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:10.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:10 compute-1 ceph-mon[79643]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:10 compute-1 podman[143181]: 2025-11-25 09:42:10.664902241 +0000 UTC m=+0.058024467 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:42:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:10 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:10 compute-1 python3.9[143216]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:42:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:11 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:11.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:11 compute-1 sudo[143385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfelnclxabececdnzjzwfaecbqipaclx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063731.3765976-63-46128867406885/AnsiballZ_command.py'
Nov 25 09:42:11 compute-1 sudo[143385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:11 compute-1 python3.9[143387]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:11 compute-1 sudo[143385]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:11 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:42:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:12.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:12 compute-1 ceph-mon[79643]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:12 compute-1 sudo[143546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyplnrxcqraloudmhhwzwftbyyqzgdgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063732.2369637-96-202968697111854/AnsiballZ_systemd_service.py'
Nov 25 09:42:12 compute-1 sudo[143546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:12 compute-1 python3.9[143548]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:42:12 compute-1 systemd[1]: Reloading.
Nov 25 09:42:12 compute-1 systemd-rc-local-generator[143568]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:13 compute-1 systemd-sysv-generator[143572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:13 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:13 compute-1 sudo[143546]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:13.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:13 compute-1 python3.9[143734]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:42:13 compute-1 network[143751]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:42:13 compute-1 network[143752]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:42:13 compute-1 network[143753]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:42:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:13 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:14 compute-1 ceph-mon[79643]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:14 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83fc002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:42:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:42:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:15.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:16.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:16 compute-1 ceph-mon[79643]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:16 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:17 compute-1 sudo[144014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gipfmluwferugqxkduvutnjruxuiggea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063736.8623128-153-212107313232015/AnsiballZ_systemd_service.py'
Nov 25 09:42:17 compute-1 sudo[144014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:17 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83fc003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:17 compute-1 python3.9[144016]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:17 compute-1 sudo[144014]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:17 compute-1 sudo[144168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ragxbsohikvbmwwfmittequwggtxyxmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063737.4033782-153-122547281822910/AnsiballZ_systemd_service.py'
Nov 25 09:42:17 compute-1 sudo[144168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:17.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:17 compute-1 python3.9[144170]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:17 compute-1 sudo[144168]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:17 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:42:18 compute-1 sudo[144321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivtglftfbbiujeoyvjcudfwrjinjkasv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063737.9459894-153-89794343292850/AnsiballZ_systemd_service.py'
Nov 25 09:42:18 compute-1 sudo[144321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:18.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:18 compute-1 python3.9[144323]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:18 compute-1 sudo[144321]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:18 compute-1 ceph-mon[79643]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:18 compute-1 sudo[144474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdzrdasnzpyylbsxhfcyyfpcbxicjsdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063738.4881065-153-260640876774847/AnsiballZ_systemd_service.py'
Nov 25 09:42:18 compute-1 sudo[144474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00098c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:18 compute-1 python3.9[144476]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:18 compute-1 sudo[144474]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:19 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:19 compute-1 sudo[144628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgflitulvaneilumzeedekissxfnyrvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063739.0381317-153-253370363540329/AnsiballZ_systemd_service.py'
Nov 25 09:42:19 compute-1 sudo[144628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:19 compute-1 python3.9[144630]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:19 compute-1 sudo[144628]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:19.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:19 compute-1 sudo[144781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxtiosemjoeuxsupnaiqmmyshjspowe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063739.587178-153-261053598730421/AnsiballZ_systemd_service.py'
Nov 25 09:42:19 compute-1 sudo[144781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:19 compute-1 sudo[144784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:42:19 compute-1 sudo[144784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:19 compute-1 sudo[144784]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:19 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83fc003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:20 compute-1 python3.9[144783]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:20 compute-1 sudo[144781]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:20.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:20 compute-1 sudo[144959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgoojmpwrmasjsnbjsuzibhsholgyrpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063740.1302023-153-64270193247079/AnsiballZ_systemd_service.py'
Nov 25 09:42:20 compute-1 sudo[144959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:20 compute-1 ceph-mon[79643]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:20 compute-1 python3.9[144961]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:20 compute-1 sudo[144959]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:20 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:21 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00098e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:21.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:21 compute-1 sudo[145113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyljneauayjhaouuiapntxnvbksozqjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063741.4894075-309-199257622320772/AnsiballZ_file.py'
Nov 25 09:42:21 compute-1 sudo[145113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:21 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:21 compute-1 python3.9[145115]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:21 compute-1 sudo[145113]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:22 compute-1 sudo[145265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srthjgykkyhkwpthvqnecuutjjhadfsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063742.0619962-309-217200590901583/AnsiballZ_file.py'
Nov 25 09:42:22 compute-1 sudo[145265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:22.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:22 compute-1 python3.9[145267]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:22 compute-1 sudo[145265]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:22 compute-1 ceph-mon[79643]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:42:22 compute-1 sudo[145417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekhbjyzmhewmqfrtyjjtreucjvulxrrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063742.5068595-309-207293042772915/AnsiballZ_file.py'
Nov 25 09:42:22 compute-1 sudo[145417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83fc003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:22 compute-1 python3.9[145419]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:22 compute-1 sudo[145417]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:23 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:23 compute-1 sudo[145569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbjqdlgameqngjmjbbqaqixgkxxgbxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063742.9421926-309-133006924662258/AnsiballZ_file.py'
Nov 25 09:42:23 compute-1 sudo[145569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:23 compute-1 python3.9[145571]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:23 compute-1 sudo[145569]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094223 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:42:23 compute-1 sudo[145722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwdbnthlojsptbfrnmmywrrxgbltfmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063743.3725493-309-266958525470318/AnsiballZ_file.py'
Nov 25 09:42:23 compute-1 sudo[145722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:23.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:23 compute-1 python3.9[145724]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:23 compute-1 sudo[145722]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:23 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:24 compute-1 sudo[145874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgjejesjwxdkrvvbfaqtgvbqhdrwwbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063743.815878-309-237157276172705/AnsiballZ_file.py'
Nov 25 09:42:24 compute-1 sudo[145874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094224 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:42:24 compute-1 python3.9[145876]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:24 compute-1 sudo[145874]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:24.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:24 compute-1 ceph-mon[79643]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:24 compute-1 sudo[146026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vubzkoxcacacyheueapfgzjnqxyumqeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063744.2780523-309-201780304871352/AnsiballZ_file.py'
Nov 25 09:42:24 compute-1 sudo[146026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:24 compute-1 python3.9[146028]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:24 compute-1 sudo[146026]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:24 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:24 compute-1 sudo[146179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uodkhonkrvfldhgtfelnfkhqezitslbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063744.7641003-459-108809449314095/AnsiballZ_file.py'
Nov 25 09:42:24 compute-1 sudo[146179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:25 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:25 compute-1 python3.9[146181]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:25 compute-1 sudo[146179]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:25 compute-1 sudo[146332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnitxaehvyohcejgjvxobglmbiwxxcne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063745.194962-459-220041574854128/AnsiballZ_file.py'
Nov 25 09:42:25 compute-1 sudo[146332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:25 compute-1 python3.9[146334]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:25 compute-1 sudo[146332]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:25.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:25 compute-1 sudo[146484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqphiaarwivozlmdfcxamjtlftdqghm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063745.641885-459-194346362310065/AnsiballZ_file.py'
Nov 25 09:42:25 compute-1 sudo[146484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:25 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:25 compute-1 python3.9[146486]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:25 compute-1 sudo[146484]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:26.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:26 compute-1 sudo[146636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkydwnsfirqfgfvpklbxoceoaocdlfdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063746.0793223-459-95134811027169/AnsiballZ_file.py'
Nov 25 09:42:26 compute-1 sudo[146636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:26 compute-1 python3.9[146638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:26 compute-1 sudo[146636]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:26 compute-1 ceph-mon[79643]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:26 compute-1 sudo[146788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivbkebhmcrpmxiojbyfxanwnriptejct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063746.5129068-459-134755343155235/AnsiballZ_file.py'
Nov 25 09:42:26 compute-1 sudo[146788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:26 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00099b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:26 compute-1 python3.9[146790]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:26 compute-1 sudo[146788]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:27 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:27 compute-1 sudo[146940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjgtjjzgjgacfjgqeghdvbcamfqtifh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063746.9307976-459-25719281323044/AnsiballZ_file.py'
Nov 25 09:42:27 compute-1 sudo[146940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:27 compute-1 python3.9[146942]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:27 compute-1 sudo[146940]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:27 compute-1 sudo[147093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmscdsqimvkeppikzcqtckwesxqblfin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063747.3667977-459-26335142900003/AnsiballZ_file.py'
Nov 25 09:42:27 compute-1 sudo[147093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:27.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:27 compute-1 python3.9[147095]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:27 compute-1 sudo[147093]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:27 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:28 compute-1 sudo[147245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsjjoaacsxbopschapppvwsmhbbehjqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063747.9738963-612-66590968731890/AnsiballZ_command.py'
Nov 25 09:42:28 compute-1 sudo[147245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:28.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:28 compute-1 ceph-mon[79643]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:28 compute-1 python3.9[147247]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:28 compute-1 sudo[147245]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:28 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:42:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 6386 writes, 26K keys, 6386 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6386 writes, 1197 syncs, 5.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6386 writes, 26K keys, 6386 commit groups, 1.0 writes per commit group, ingest: 19.54 MB, 0.03 MB/s
                                           Interval WAL: 6386 writes, 1197 syncs, 5.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:42:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00099d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:29 compute-1 python3.9[147399]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:42:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:29.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:29 compute-1 sudo[147550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjvlzfudkngeuyfaujsnlbholsuadbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063749.5156913-666-150390004706688/AnsiballZ_systemd_service.py'
Nov 25 09:42:29 compute-1 sudo[147550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:29 compute-1 python3.9[147552]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:42:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:29 compute-1 systemd[1]: Reloading.
Nov 25 09:42:30 compute-1 systemd-sysv-generator[147576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:30 compute-1 systemd-rc-local-generator[147573]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:30 compute-1 sudo[147550]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:30.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:30 compute-1 ceph-mon[79643]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:30 compute-1 sudo[147737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzlnykkhlvtdubszphpzxdanqgxtkrzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063750.450804-690-3132233503229/AnsiballZ_command.py'
Nov 25 09:42:30 compute-1 sudo[147737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:30 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:30 compute-1 python3.9[147739]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:30 compute-1 sudo[147737]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:31 compute-1 sudo[147890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxscpoghcrnuwmqyujitkxnlwnbaomtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063750.9092877-690-37795162284253/AnsiballZ_command.py'
Nov 25 09:42:31 compute-1 sudo[147890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:31 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:31 compute-1 python3.9[147892]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:31 compute-1 sudo[147890]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:31 compute-1 sudo[148044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akqgcwgxuwdtkhspgdompcyjfwnlgeuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063751.356005-690-242736024281692/AnsiballZ_command.py'
Nov 25 09:42:31 compute-1 sudo[148044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:31 compute-1 python3.9[148046]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:31.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:31 compute-1 sudo[148044]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:31 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00099f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:31 compute-1 sudo[148197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqdeuplepigyivppvuesqabphucbrdcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063751.7882102-690-248702969258416/AnsiballZ_command.py'
Nov 25 09:42:31 compute-1 sudo[148197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:32 compute-1 python3.9[148199]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:32 compute-1 sudo[148197]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:32 compute-1 sudo[148350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpqsyfzytgzyhpvwbyzkgusneqyruyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063752.222302-690-132286806414655/AnsiballZ_command.py'
Nov 25 09:42:32 compute-1 sudo[148350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:32 compute-1 ceph-mon[79643]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:42:32 compute-1 python3.9[148352]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:32 compute-1 sudo[148350]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:42:32 compute-1 sudo[148503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svfyfduuhdfxevpfbhzfslqtrrgeotry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063752.7065616-690-3499750863774/AnsiballZ_command.py'
Nov 25 09:42:32 compute-1 sudo[148503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:33 compute-1 python3.9[148505]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:33 compute-1 sudo[148503]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:33 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:33 compute-1 sudo[148657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leyyujodsayqubezbeqicujibqdvtfgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063753.1321185-690-244264943586642/AnsiballZ_command.py'
Nov 25 09:42:33 compute-1 sudo[148657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:33 compute-1 python3.9[148659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:33 compute-1 sudo[148657]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:33.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:33 compute-1 podman[148685]: 2025-11-25 09:42:33.785044536 +0000 UTC m=+0.038393318 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 09:42:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:33 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:34 compute-1 ceph-mon[79643]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:34 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00099f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:34 compute-1 sudo[148826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqtajiopwhyjiclqegflvmbtkfqrvueq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063754.573344-852-108597803054069/AnsiballZ_getent.py'
Nov 25 09:42:34 compute-1 sudo[148826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:34 compute-1 python3.9[148828]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 09:42:35 compute-1 sudo[148826]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040053e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:35 compute-1 sudo[148980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nysiemiddtheuybzbkzqzuojpybdgykl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063755.2077382-876-13288066885251/AnsiballZ_group.py'
Nov 25 09:42:35 compute-1 sudo[148980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:35 compute-1 python3.9[148982]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:42:35 compute-1 groupadd[148983]: group added to /etc/group: name=libvirt, GID=42473
Nov 25 09:42:35 compute-1 groupadd[148983]: group added to /etc/gshadow: name=libvirt
Nov 25 09:42:35 compute-1 groupadd[148983]: new group: name=libvirt, GID=42473
Nov 25 09:42:35 compute-1 sudo[148980]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:35.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:42:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:42:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:36 compute-1 sudo[149138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drlrftbpdpzrfbdmftdjctvydwabaopn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063755.918363-900-227869730204278/AnsiballZ_user.py'
Nov 25 09:42:36 compute-1 sudo[149138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:36 compute-1 python3.9[149140]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 09:42:36 compute-1 useradd[149142]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 09:42:36 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:42:36 compute-1 ceph-mon[79643]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:36 compute-1 sudo[149138]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:37 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009a10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:37 compute-1 sudo[149299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzrielsqclmkkyuukxlgommdjyqyxyln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063756.9248986-933-79555704569129/AnsiballZ_setup.py'
Nov 25 09:42:37 compute-1 sudo[149299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:37 compute-1 python3.9[149301]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:42:37 compute-1 sudo[149299]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:37.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:37 compute-1 sudo[149384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgrjeqfyigzgigqcncxrdjwtrjfleffi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063756.9248986-933-79555704569129/AnsiballZ_dnf.py'
Nov 25 09:42:37 compute-1 sudo[149384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:37 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040053e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:38 compute-1 python3.9[149386]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:42:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:38 compute-1 ceph-mon[79643]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:42:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:39 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:39.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:39 compute-1 sudo[149395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:42:39 compute-1 sudo[149395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:39 compute-1 sudo[149395]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:39 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:40 compute-1 ceph-mon[79643]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040053e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:40 compute-1 podman[149424]: 2025-11-25 09:42:40.832370329 +0000 UTC m=+0.086588655 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 09:42:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:41.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:42 compute-1 ceph-mon[79643]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:42:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:43 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:43.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:43 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:44.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:44 compute-1 ceph-mon[79643]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:45 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094245 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:42:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:45 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:46.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:46 compute-1 ceph-mon[79643]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:47 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:47.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:47 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:48.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:48 compute-1 ceph-mon[79643]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:49.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:50.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:50 compute-1 ceph-mon[79643]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec006e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:51 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:51.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:51 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec007ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:52 compute-1 ceph-mon[79643]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:53 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec007ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:53.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:53 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:54.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:54 compute-1 ceph-mon[79643]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec007ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:55.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec007ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:56 compute-1 ceph-mon[79643]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:42:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2506 writes, 14K keys, 2506 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s
                                           Cumulative WAL: 2506 writes, 2506 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2506 writes, 14K keys, 2506 commit groups, 1.0 writes per commit group, ingest: 38.71 MB, 0.06 MB/s
                                           Interval WAL: 2506 writes, 2506 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    432.4      0.05              0.03         6    0.008       0      0       0.0       0.0
                                             L6      1/0   11.17 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9    492.0    423.4      0.14              0.09         5    0.028     19K   2261       0.0       0.0
                                            Sum      1/0   11.17 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    366.8    425.7      0.19              0.12        11    0.017     19K   2261       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    368.4    427.5      0.19              0.12        10    0.019     19K   2261       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    492.0    423.4      0.14              0.09         5    0.028     19K   2261       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    439.9      0.05              0.03         5    0.009       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.020
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633d9fc7350#2 capacity: 304.00 MB usage: 2.80 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(159,2.60 MB,0.856676%) FilterBlock(11,63.92 KB,0.0205341%) IndexBlock(11,131.53 KB,0.0422528%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 09:42:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:57 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:57.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:57 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:58.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:58 compute-1 ceph-mon[79643]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:58 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40030d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:59 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:42:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:59.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:59 compute-1 sudo[149639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:42:59 compute-1 sudo[149639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:59 compute-1 sudo[149639]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:42:59 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:00.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:00 compute-1 ceph-mon[79643]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:00 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:01 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40030d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:01.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:01 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:02.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:02 compute-1 kernel: SELinux:  Converting 2772 SID table entries...
Nov 25 09:43:02 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:43:02 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:43:02 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:43:02 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:43:02 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:43:02 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:43:02 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:43:02 compute-1 ceph-mon[79643]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:02 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:03 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:03 compute-1 sudo[149674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:03 compute-1 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 25 09:43:03 compute-1 sudo[149674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:03 compute-1 sudo[149674]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:03 compute-1 sudo[149699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:43:03 compute-1 sudo[149699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:43:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:03.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:43:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:03 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40030d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:04 compute-1 sudo[149699]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:04.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:04 compute-1 ceph-mon[79643]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:43:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:04 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:04 compute-1 podman[149753]: 2025-11-25 09:43:04.79599986 +0000 UTC m=+0.044008800 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 25 09:43:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:43:04.989 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:43:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:43:04.990 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:43:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:43:04.990 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:43:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:05 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:05.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:05 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:06.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:06 compute-1 ceph-mon[79643]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:06 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:07 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:07.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:07 compute-1 sudo[149771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:43:07 compute-1 sudo[149771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:07 compute-1 sudo[149771]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:07 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:08.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:08 compute-1 ceph-mon[79643]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:08 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:09 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:09 compute-1 kernel: SELinux:  Converting 2772 SID table entries...
Nov 25 09:43:09 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:43:09 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:43:09 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:43:09 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:43:09 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:43:09 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:43:09 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:43:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:09.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:09 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:10.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:10 compute-1 ceph-mon[79643]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:10 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:11 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:11 compute-1 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 09:43:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:11.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:11 compute-1 podman[149805]: 2025-11-25 09:43:11.812983628 +0000 UTC m=+0.066142259 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:43:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:11 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:12.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:12 compute-1 ceph-mon[79643]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:13 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:13.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:13 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:14.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:14 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:14 compute-1 ceph-mon[79643]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:15.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:16.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:16 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:16 compute-1 ceph-mon[79643]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:17 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:17.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:17 compute-1 ceph-mon[79643]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:17 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:18.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:19 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408004b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:19.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:19 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:20 compute-1 sudo[151396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:20 compute-1 sudo[151396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:20 compute-1 sudo[151396]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:20.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:20 compute-1 ceph-mon[79643]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:20 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:21 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:21.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:21 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:22.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:22 compute-1 ceph-mon[79643]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:23 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:23.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:23 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:24.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:24 compute-1 ceph-mon[79643]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:24 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:25 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0009e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:25.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:25 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:26 compute-1 ceph-mon[79643]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:26 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:27 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:27.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:27 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:28 compute-1 ceph-mon[79643]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:28 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100bf2f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:29.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:30 compute-1 ceph-mon[79643]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:30 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:31 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100bfe30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:31.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:31 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:32.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:32 compute-1 ceph-mon[79643]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84040064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:33 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:33.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:33 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100bfe30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:34.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:34 compute-1 ceph-mon[79643]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:34 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:35.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:35 compute-1 podman[166654]: 2025-11-25 09:43:35.781967922 +0000 UTC m=+0.038857161 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 09:43:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:36.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:36 compute-1 ceph-mon[79643]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:37 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:37.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:38 compute-1 ceph-mon[79643]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:39 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:40 compute-1 sudo[166684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:40 compute-1 sudo[166684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:40 compute-1 sudo[166684]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:40 compute-1 ceph-mon[79643]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:41.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:42.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:42 compute-1 ceph-mon[79643]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:42 compute-1 podman[166710]: 2025-11-25 09:43:42.828170446 +0000 UTC m=+0.081856898 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 09:43:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:43 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:43.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:44 compute-1 ceph-mon[79643]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:44 compute-1 kernel: SELinux:  Converting 2773 SID table entries...
Nov 25 09:43:44 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:43:44 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:43:44 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:43:44 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:43:44 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:43:44 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:43:44 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:43:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:45 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:45 compute-1 groupadd[166745]: group added to /etc/group: name=dnsmasq, GID=992
Nov 25 09:43:45 compute-1 groupadd[166745]: group added to /etc/gshadow: name=dnsmasq
Nov 25 09:43:45 compute-1 groupadd[166745]: new group: name=dnsmasq, GID=992
Nov 25 09:43:45 compute-1 useradd[166753]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 25 09:43:45 compute-1 dbus-broker-launch[726]: Noticed file-system modification, trigger reload.
Nov 25 09:43:45 compute-1 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 09:43:45 compute-1 dbus-broker-launch[726]: Noticed file-system modification, trigger reload.
Nov 25 09:43:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:45.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:45 compute-1 groupadd[166766]: group added to /etc/group: name=clevis, GID=991
Nov 25 09:43:45 compute-1 groupadd[166766]: group added to /etc/gshadow: name=clevis
Nov 25 09:43:45 compute-1 groupadd[166766]: new group: name=clevis, GID=991
Nov 25 09:43:45 compute-1 useradd[166773]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 25 09:43:45 compute-1 usermod[166783]: add 'clevis' to group 'tss'
Nov 25 09:43:45 compute-1 usermod[166783]: add 'clevis' to shadow group 'tss'
Nov 25 09:43:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:46 compute-1 ceph-mon[79643]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:47 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:47 compute-1 polkitd[43350]: Reloading rules
Nov 25 09:43:47 compute-1 polkitd[43350]: Collecting garbage unconditionally...
Nov 25 09:43:47 compute-1 polkitd[43350]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 09:43:47 compute-1 polkitd[43350]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 09:43:47 compute-1 polkitd[43350]: Finished loading, compiling and executing 3 rules
Nov 25 09:43:47 compute-1 polkitd[43350]: Reloading rules
Nov 25 09:43:47 compute-1 polkitd[43350]: Collecting garbage unconditionally...
Nov 25 09:43:47 compute-1 polkitd[43350]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 09:43:47 compute-1 polkitd[43350]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 09:43:47 compute-1 polkitd[43350]: Finished loading, compiling and executing 3 rules
Nov 25 09:43:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:47.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:48 compute-1 groupadd[166971]: group added to /etc/group: name=ceph, GID=167
Nov 25 09:43:48 compute-1 groupadd[166971]: group added to /etc/gshadow: name=ceph
Nov 25 09:43:48 compute-1 groupadd[166971]: new group: name=ceph, GID=167
Nov 25 09:43:48 compute-1 useradd[166977]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 25 09:43:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:48 compute-1 ceph-mon[79643]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8404006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:50.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:50 compute-1 sshd[964]: Received signal 15; terminating.
Nov 25 09:43:50 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 09:43:50 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 09:43:50 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 09:43:50 compute-1 systemd[1]: sshd.service: Consumed 1.408s CPU time, read 32.0K from disk, written 0B to disk.
Nov 25 09:43:50 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 09:43:50 compute-1 systemd[1]: Stopping sshd-keygen.target...
Nov 25 09:43:50 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:43:50 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:43:50 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:43:50 compute-1 systemd[1]: Reached target sshd-keygen.target.
Nov 25 09:43:50 compute-1 systemd[1]: Starting OpenSSH server daemon...
Nov 25 09:43:50 compute-1 sshd[167624]: Server listening on 0.0.0.0 port 22.
Nov 25 09:43:50 compute-1 sshd[167624]: Server listening on :: port 22.
Nov 25 09:43:50 compute-1 systemd[1]: Started OpenSSH server daemon.
Nov 25 09:43:50 compute-1 ceph-mon[79643]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8414002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:51 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:51 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:43:51 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:43:51 compute-1 systemd[1]: Reloading.
Nov 25 09:43:51 compute-1 systemd-rc-local-generator[167876]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:43:51 compute-1 systemd-sysv-generator[167883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:43:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:51.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:51 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:43:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:52.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:52 compute-1 ceph-mon[79643]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:53 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8414004fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:53 compute-1 sudo[149384]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:53.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:54 compute-1 ceph-mon[79643]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:55.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8414005160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:56.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:56 compute-1 ceph-mon[79643]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:56 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:43:56 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:43:56 compute-1 systemd[1]: man-db-cache-update.service: Consumed 6.539s CPU time.
Nov 25 09:43:56 compute-1 systemd[1]: run-rf67d2f1509cf4d8696713b0c54dc9d09.service: Deactivated successfully.
Nov 25 09:43:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:57 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:57.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:58 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:58.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:58 compute-1 ceph-mon[79643]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:58 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8414005160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:43:59 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:43:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:59.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:00 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:00 compute-1 sudo[176297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:00 compute-1 sudo[176297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:00 compute-1 sudo[176297]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:00.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:00 compute-1 ceph-mon[79643]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:00 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:01 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:01.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:02 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:02.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:02 compute-1 ceph-mon[79643]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:02 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:03 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:03.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:04 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:04.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:04 compute-1 ceph-mon[79643]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:04 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:44:04.990 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:44:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:44:04.990 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:44:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:44:04.991 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:44:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:05 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:05.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:06 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:06 compute-1 ceph-mon[79643]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:06 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:06 compute-1 podman[176325]: 2025-11-25 09:44:06.811987533 +0000 UTC m=+0.065046034 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:44:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:07 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:07.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:07 compute-1 sudo[176342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:07 compute-1 sudo[176342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:07 compute-1 sudo[176342]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:07 compute-1 sudo[176367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:44:07 compute-1 sudo[176367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:08 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:08.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:08 compute-1 sudo[176367]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:08 compute-1 ceph-mon[79643]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:44:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:44:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:44:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:08 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c0eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:09 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:09.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:10 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8414006590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:10.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:10 compute-1 ceph-mon[79643]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:10 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:11 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c23a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:11 compute-1 sudo[176423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:44:11 compute-1 sudo[176423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:11 compute-1 sudo[176423]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:11.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84100c23a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:12.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:12 compute-1 ceph-mon[79643]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8414006590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:13 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:44:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:44:13 compute-1 podman[176449]: 2025-11-25 09:44:13.827000287 +0000 UTC m=+0.071461478 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:44:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:14 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:14 compute-1 ceph-mon[79643]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:14 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:15.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:16 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:16.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:16 compute-1 ceph-mon[79643]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:16 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:17 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:17 compute-1 sudo[176599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzdgctliwfsmtfwakigmwxsrbkzstbla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063857.163666-969-31755271479933/AnsiballZ_systemd.py'
Nov 25 09:44:17 compute-1 sudo[176599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:17.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:17 compute-1 python3.9[176601]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:17 compute-1 systemd[1]: Reloading.
Nov 25 09:44:17 compute-1 systemd-rc-local-generator[176625]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:18 compute-1 systemd-sysv-generator[176628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:18 compute-1 sudo[176599]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:18.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:18 compute-1 sudo[176788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsgkszctqjfdsudhokopraljffqpoqin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063858.4037285-969-191135219940508/AnsiballZ_systemd.py'
Nov 25 09:44:18 compute-1 sudo[176788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:18 compute-1 ceph-mon[79643]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:18 compute-1 python3.9[176790]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:18 compute-1 systemd[1]: Reloading.
Nov 25 09:44:18 compute-1 systemd-rc-local-generator[176813]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:18 compute-1 systemd-sysv-generator[176816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:19 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:19 compute-1 sudo[176788]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:19 compute-1 sudo[176979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niwrbhrewuiswyvknjujwzjejgkxvmnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063859.2608411-969-146281396098686/AnsiballZ_systemd.py'
Nov 25 09:44:19 compute-1 sudo[176979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:19 compute-1 python3.9[176981]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:19 compute-1 systemd[1]: Reloading.
Nov 25 09:44:19 compute-1 systemd-sysv-generator[177007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:19 compute-1 systemd-rc-local-generator[177003]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:19.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:19 compute-1 sudo[176979]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:20 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:20 compute-1 sudo[177107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:20 compute-1 sudo[177107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:20 compute-1 sudo[177107]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:20 compute-1 sudo[177193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onlqgpvpjfswrwftsjqyyldnqehxuddj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063860.0630195-969-90597664393740/AnsiballZ_systemd.py'
Nov 25 09:44:20 compute-1 sudo[177193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:20.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:20 compute-1 python3.9[177195]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:20 compute-1 systemd[1]: Reloading.
Nov 25 09:44:20 compute-1 systemd-rc-local-generator[177222]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:20 compute-1 systemd-sysv-generator[177225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:20 compute-1 ceph-mon[79643]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:20 compute-1 sudo[177193]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:20 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:21 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:21 compute-1 sudo[177385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhtrvjijgjazjxzmrsajhifcrrljwjbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063861.0927415-1056-136213455635717/AnsiballZ_systemd.py'
Nov 25 09:44:21 compute-1 sudo[177385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:21 compute-1 python3.9[177387]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:21 compute-1 systemd[1]: Reloading.
Nov 25 09:44:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:21.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:21 compute-1 systemd-sysv-generator[177417]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:21 compute-1 systemd-rc-local-generator[177413]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:22 compute-1 sudo[177385]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:22 compute-1 sudo[177574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-autkcljkanqzugtdwhzauiozlskxdcmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063862.1111002-1056-86058392211125/AnsiballZ_systemd.py'
Nov 25 09:44:22 compute-1 sudo[177574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:22.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:22 compute-1 python3.9[177576]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:22 compute-1 systemd[1]: Reloading.
Nov 25 09:44:22 compute-1 ceph-mon[79643]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:22 compute-1 systemd-rc-local-generator[177600]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:22 compute-1 systemd-sysv-generator[177604]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:22 compute-1 sudo[177574]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:23 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:23 compute-1 sudo[177763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtbxxxdkqnhrgoesyqqtkxxhxblipqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063862.9448447-1056-24554757411744/AnsiballZ_systemd.py'
Nov 25 09:44:23 compute-1 sudo[177763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:23 compute-1 python3.9[177765]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:23 compute-1 systemd[1]: Reloading.
Nov 25 09:44:23 compute-1 systemd-sysv-generator[177795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:23 compute-1 systemd-rc-local-generator[177792]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:23 compute-1 sudo[177763]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:23.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:24 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:24 compute-1 sudo[177953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giypxpslseltcnipaoogvbcrljhkewej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063863.8144438-1056-35853246878135/AnsiballZ_systemd.py'
Nov 25 09:44:24 compute-1 sudo[177953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:24 compute-1 python3.9[177955]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:24 compute-1 sudo[177953]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:24.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:24 compute-1 sudo[178108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnbzllbgyalvlzgdxwixsvnzrfituavx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063864.4612849-1056-110549123123060/AnsiballZ_systemd.py'
Nov 25 09:44:24 compute-1 sudo[178108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:24 compute-1 ceph-mon[79643]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:24 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:24 compute-1 python3.9[178110]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:24 compute-1 systemd[1]: Reloading.
Nov 25 09:44:25 compute-1 systemd-rc-local-generator[178134]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:25 compute-1 systemd-sysv-generator[178137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:25 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:25 compute-1 sudo[178108]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:25 compute-1 sudo[178298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxaiarkmwizarcqerqiugumgohretkfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063865.4990659-1164-217466890269622/AnsiballZ_systemd.py'
Nov 25 09:44:25 compute-1 sudo[178298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:25.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:25 compute-1 python3.9[178300]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:25 compute-1 systemd[1]: Reloading.
Nov 25 09:44:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:26 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:26 compute-1 systemd-sysv-generator[178331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:26 compute-1 systemd-rc-local-generator[178324]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:26 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 09:44:26 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 09:44:26 compute-1 sudo[178298]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:26.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:26 compute-1 ceph-mon[79643]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:26 compute-1 sudo[178490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzzsszhfxbkazvafkghxdcikcdicwhax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063866.4955933-1188-115328211644559/AnsiballZ_systemd.py'
Nov 25 09:44:26 compute-1 sudo[178490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:26 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:26 compute-1 python3.9[178492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:26 compute-1 sudo[178490]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:27 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:27 compute-1 sudo[178646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enxzgxspkfhblybwgvnvnjdqcputpgdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063867.1076088-1188-51184170035101/AnsiballZ_systemd.py'
Nov 25 09:44:27 compute-1 sudo[178646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:27 compute-1 python3.9[178648]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:27 compute-1 sudo[178646]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:27.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:28 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:28 compute-1 sudo[178801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifpwshsnywjzcokqkqstupdlkutqhiab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063867.8982687-1188-17378695490190/AnsiballZ_systemd.py'
Nov 25 09:44:28 compute-1 sudo[178801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:28 compute-1 python3.9[178803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:28 compute-1 sudo[178801]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:28.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:28 compute-1 sudo[178956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vshlgthdltfwffvxvbdgibcfywuidsud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063868.480822-1188-277151926314010/AnsiballZ_systemd.py'
Nov 25 09:44:28 compute-1 sudo[178956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:28 compute-1 ceph-mon[79643]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:28 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:28 compute-1 python3.9[178958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:28 compute-1 sudo[178956]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:29 compute-1 sudo[179112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcfvvlsduxqidklgmroxhfblhpnpjkua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063869.0535007-1188-225256093465430/AnsiballZ_systemd.py'
Nov 25 09:44:29 compute-1 sudo[179112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:29 compute-1 python3.9[179114]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:29 compute-1 sudo[179112]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:29.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:29 compute-1 sudo[179267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwuvmbtatwoyxdpebjnzjfyllgumnfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063869.633446-1188-122996322823880/AnsiballZ_systemd.py'
Nov 25 09:44:29 compute-1 sudo[179267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:30 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:30 compute-1 python3.9[179269]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:30 compute-1 sudo[179267]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:30.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:30 compute-1 sudo[179422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjqbebyyijekwmqimkansuoxtsmouxoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063870.2295752-1188-273089316431707/AnsiballZ_systemd.py'
Nov 25 09:44:30 compute-1 sudo[179422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:30 compute-1 ceph-mon[79643]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:30 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:30 compute-1 python3.9[179424]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:30 compute-1 sudo[179422]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:31 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:31 compute-1 sudo[179577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nunznsncmxccsueemqfvirnkjtiyqoot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063870.973164-1188-88257868019893/AnsiballZ_systemd.py'
Nov 25 09:44:31 compute-1 sudo[179577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:31 compute-1 python3.9[179580]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:31 compute-1 sudo[179577]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:31 compute-1 sudo[179733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfioxetbzgydaoailrmqdgdxnepbocre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063871.5923471-1188-75002715611683/AnsiballZ_systemd.py'
Nov 25 09:44:31 compute-1 sudo[179733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:32 compute-1 python3.9[179735]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:32 compute-1 sudo[179733]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:32.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:32 compute-1 sudo[179888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbegfvwqxjyxroihtkikyjbhcetekrto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063872.2208552-1188-165383565271528/AnsiballZ_systemd.py'
Nov 25 09:44:32 compute-1 sudo[179888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:32 compute-1 python3.9[179890]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:32 compute-1 ceph-mon[79643]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:32 compute-1 sudo[179888]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:33 compute-1 sudo[180043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iesbwprluqrifdhzjarqfnfkrpxnwrkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063872.8486896-1188-153759845906676/AnsiballZ_systemd.py'
Nov 25 09:44:33 compute-1 sudo[180043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:33 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:33 compute-1 python3.9[180045]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:33 compute-1 sudo[180043]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:33 compute-1 sudo[180199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mddndswstuwbrxpdxfbiqugkfdnpwcdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063873.6245463-1188-278801880221840/AnsiballZ_systemd.py'
Nov 25 09:44:33 compute-1 sudo[180199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:33.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:34 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:34 compute-1 python3.9[180201]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:34 compute-1 sudo[180199]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:34 compute-1 sudo[180354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aebbbsjdkmwtmgajdcydyiscgfrserhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063874.1981127-1188-142391863208871/AnsiballZ_systemd.py'
Nov 25 09:44:34 compute-1 sudo[180354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:44:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:34.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:44:34 compute-1 python3.9[180356]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:34 compute-1 sudo[180354]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:34 compute-1 ceph-mon[79643]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:34 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:34 compute-1 sudo[180509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqioahcyglkyfhzogngruyxogjcjmdrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063874.7589588-1188-178419305819438/AnsiballZ_systemd.py'
Nov 25 09:44:34 compute-1 sudo[180509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:35 compute-1 python3.9[180511]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:35 compute-1 sudo[180509]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:35.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:35 compute-1 sudo[180665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veppodugpyvdoqerubgytzjlnqhxtzki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063875.65226-1494-183123359628731/AnsiballZ_file.py'
Nov 25 09:44:35 compute-1 sudo[180665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:35 compute-1 python3.9[180667]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:36 compute-1 sudo[180665]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:36 compute-1 sudo[180817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyalqhbmfbappmvievofpkfbimtgrtgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063876.106556-1494-257951706086086/AnsiballZ_file.py'
Nov 25 09:44:36 compute-1 sudo[180817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:36.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:36 compute-1 python3.9[180819]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:36 compute-1 sudo[180817]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:36 compute-1 ceph-mon[79643]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84140072c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:36 compute-1 sudo[180980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziwsqnseczmbzycqdkndeecwnsrkdpmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063876.7114155-1494-221977782641556/AnsiballZ_file.py'
Nov 25 09:44:36 compute-1 sudo[180980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:36 compute-1 podman[180943]: 2025-11-25 09:44:36.943935668 +0000 UTC m=+0.067292030 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 09:44:37 compute-1 python3.9[180987]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:37 compute-1 sudo[180980]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:37 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:37 compute-1 sudo[181140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqzxcisrhcvulokxqeygwmcpyguwtvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063877.1983109-1494-210104971824555/AnsiballZ_file.py'
Nov 25 09:44:37 compute-1 sudo[181140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:37 compute-1 python3.9[181142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:37 compute-1 sudo[181140]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:37 compute-1 sudo[181292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuhuqvwlqdgryphefjvrkipxrsimopat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063877.6445448-1494-214584350756502/AnsiballZ_file.py'
Nov 25 09:44:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:37 compute-1 sudo[181292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:37.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:37 compute-1 python3.9[181294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:38 compute-1 sudo[181292]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:38 compute-1 sudo[181444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladgevfflelqfndnhydbamtozrcypdxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063878.110094-1494-125411847338033/AnsiballZ_file.py'
Nov 25 09:44:38 compute-1 sudo[181444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:38.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:38 compute-1 python3.9[181446]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:38 compute-1 sudo[181444]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:38 compute-1 ceph-mon[79643]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c002660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:38 compute-1 sudo[181596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dngudutzkisgluprwgasmqbukqdqcrbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063878.6570005-1623-23627916353930/AnsiballZ_stat.py'
Nov 25 09:44:38 compute-1 sudo[181596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:39 compute-1 python3.9[181598]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:39 compute-1 sudo[181596]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:39 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:39 compute-1 sudo[181722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spijlaicameygdvzcgemrjutroyvinmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063878.6570005-1623-23627916353930/AnsiballZ_copy.py'
Nov 25 09:44:39 compute-1 sudo[181722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:39 compute-1 python3.9[181724]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063878.6570005-1623-23627916353930/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:39 compute-1 sudo[181722]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:39.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:39 compute-1 sudo[181874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcgffqpllcpgrrunocvdccvfubcpuxxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063879.7712746-1623-81981063340276/AnsiballZ_stat.py'
Nov 25 09:44:39 compute-1 sudo[181874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408005df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:40 compute-1 python3.9[181876]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:40 compute-1 sudo[181874]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:40 compute-1 sudo[181905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:40 compute-1 sudo[181905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:40 compute-1 sudo[181905]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:40 compute-1 sudo[182024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwdwzojjcywynndxzemxbsjjbindyhhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063879.7712746-1623-81981063340276/AnsiballZ_copy.py'
Nov 25 09:44:40 compute-1 sudo[182024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:40.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:40 compute-1 python3.9[182026]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063879.7712746-1623-81981063340276/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:40 compute-1 sudo[182024]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:40 compute-1 ceph-mon[79643]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:40 compute-1 sudo[182176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fizkubenoiblduconbxkenfccmbazjfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063880.6161208-1623-264945578761411/AnsiballZ_stat.py'
Nov 25 09:44:40 compute-1 sudo[182176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:40 compute-1 python3.9[182178]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:40 compute-1 sudo[182176]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c003210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:41 compute-1 sudo[182302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwnsvorvuhuupnngpgsyktcmkazvqogk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063880.6161208-1623-264945578761411/AnsiballZ_copy.py'
Nov 25 09:44:41 compute-1 sudo[182302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:41 compute-1 python3.9[182304]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063880.6161208-1623-264945578761411/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:41 compute-1 sudo[182302]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:41 compute-1 sudo[182454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehczhcaciyipwcphkqttnoikrwwdpkql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063881.5002747-1623-156882281351041/AnsiballZ_stat.py'
Nov 25 09:44:41 compute-1 sudo[182454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:41.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:41 compute-1 python3.9[182456]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:41 compute-1 sudo[182454]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420005260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:42 compute-1 sudo[182579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfzxzpsxznoleeskajljwqjjthogadnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063881.5002747-1623-156882281351041/AnsiballZ_copy.py'
Nov 25 09:44:42 compute-1 sudo[182579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:42 compute-1 python3.9[182581]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063881.5002747-1623-156882281351041/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:42 compute-1 sudo[182579]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:42.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:42 compute-1 sudo[182731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lalxqhuysuhlemogcftxpaqjduevtsxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063882.4113047-1623-196773030236585/AnsiballZ_stat.py'
Nov 25 09:44:42 compute-1 sudo[182731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:42 compute-1 ceph-mon[79643]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:42 compute-1 python3.9[182733]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:42 compute-1 sudo[182731]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:43 compute-1 sudo[182856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrsuggckwlcbjjnilmlbialhlyvxdeqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063882.4113047-1623-196773030236585/AnsiballZ_copy.py'
Nov 25 09:44:43 compute-1 sudo[182856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:43 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:43 compute-1 python3.9[182858]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063882.4113047-1623-196773030236585/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:43 compute-1 sudo[182856]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:43 compute-1 sudo[183009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqwxrgzsvikfriazszthlxuidudpkjpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063883.3749886-1623-269469150084952/AnsiballZ_stat.py'
Nov 25 09:44:43 compute-1 sudo[183009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:43 compute-1 python3.9[183011]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:43 compute-1 sudo[183009]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:43.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c003210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:44 compute-1 sudo[183146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-felguekypvbsunvgkwocgmfhheiiqgwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063883.3749886-1623-269469150084952/AnsiballZ_copy.py'
Nov 25 09:44:44 compute-1 sudo[183146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:44 compute-1 podman[183108]: 2025-11-25 09:44:44.086773008 +0000 UTC m=+0.057012597 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:44:44 compute-1 python3.9[183153]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063883.3749886-1623-269469150084952/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:44 compute-1 sudo[183146]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:44.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:44 compute-1 sudo[183309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lndgsoppjevfoyylulbdceybkdzcxupz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063884.3417284-1623-168099207665521/AnsiballZ_stat.py'
Nov 25 09:44:44 compute-1 sudo[183309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:44 compute-1 python3.9[183311]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:44 compute-1 sudo[183309]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:44 compute-1 ceph-mon[79643]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420005260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:44 compute-1 sudo[183432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqtogtgqoacouazwmqrxhaawptzengtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063884.3417284-1623-168099207665521/AnsiballZ_copy.py'
Nov 25 09:44:44 compute-1 sudo[183432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:45 compute-1 python3.9[183434]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063884.3417284-1623-168099207665521/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:45 compute-1 sudo[183432]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:45 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:45 compute-1 sudo[183585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ievxsqizbjmhkpkluxegtwuwmvdlmddm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063885.1811347-1623-71185909939538/AnsiballZ_stat.py'
Nov 25 09:44:45 compute-1 sudo[183585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:45 compute-1 python3.9[183587]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:45 compute-1 sudo[183585]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:45 compute-1 sudo[183710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpeocemoeerlmgszbqegiqddegujziag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063885.1811347-1623-71185909939538/AnsiballZ_copy.py'
Nov 25 09:44:45 compute-1 sudo[183710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:45.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:45 compute-1 python3.9[183712]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063885.1811347-1623-71185909939538/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:45 compute-1 sudo[183710]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:46 compute-1 sudo[183862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdzqgkwnfhabxzrobbgemdevkdmchuyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063886.1465135-1962-61393579320721/AnsiballZ_command.py'
Nov 25 09:44:46 compute-1 sudo[183862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:46.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:46 compute-1 python3.9[183864]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 09:44:46 compute-1 sudo[183862]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:46 compute-1 ceph-mon[79643]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c003210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:46 compute-1 sudo[184015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orgncicmyemhkpdeasfarcuzncduzeyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063886.7314923-1989-132440709557316/AnsiballZ_file.py'
Nov 25 09:44:46 compute-1 sudo[184015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:47 compute-1 python3.9[184017]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:47 compute-1 sudo[184015]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:47 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420005f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:47 compute-1 sudo[184168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crvgnrovkwkblitlelicvfhkfjcmgptx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063887.1845944-1989-30920351696414/AnsiballZ_file.py'
Nov 25 09:44:47 compute-1 sudo[184168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:47 compute-1 python3.9[184170]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:47 compute-1 sudo[184168]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:47 compute-1 sudo[184320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcgsdihjuamyenqtpukqzslpaxoyyssf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063887.6426673-1989-88110617538597/AnsiballZ_file.py'
Nov 25 09:44:47 compute-1 sudo[184320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:47.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
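The radosgw triplet above — "starting new request", "req done", and a beast access line — repeats throughout this capture: anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, one per prober roughly every two seconds, each answered 200 with near-zero latency. The HTTP/1.0 HEAD verb, the anonymous user, and the fixed cadence are characteristic of load-balancer health checks rather than client traffic. A minimal sketch of such a probe follows; the target host and port are assumptions, since the RGW listening endpoint is not shown in this excerpt.

    # Hypothetical health probe matching the pattern logged by beast:
    # an anonymous "HEAD / HTTP/1.0" request answered with a 200 status line.
    import socket

    def probe(host: str, port: int) -> bool:
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            status = s.recv(64).decode("latin-1", "replace")
        return " 200 " in status

    print(probe("compute-1", 8080))  # assumed endpoint, not taken from the log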
Nov 25 09:44:47 compute-1 python3.9[184322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:47 compute-1 sudo[184320]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:48 compute-1 sudo[184472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsvcwixlmpwwphqrydagbryihsqjrxfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063888.0839837-1989-244993524761897/AnsiballZ_file.py'
Nov 25 09:44:48 compute-1 sudo[184472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:48.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:48 compute-1 python3.9[184474]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:48 compute-1 sudo[184472]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:48 compute-1 sudo[184624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eedlzoqwiqyiocuqbvvaudiobfnflmng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063888.541038-1989-91056314771488/AnsiballZ_file.py'
Nov 25 09:44:48 compute-1 sudo[184624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:48 compute-1 ceph-mon[79643]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:48 compute-1 python3.9[184626]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:48 compute-1 sudo[184624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c004930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:49 compute-1 sudo[184777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwunajqlptoqvvddbqceurdfsiganqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063889.098228-1989-161307479172723/AnsiballZ_file.py'
Nov 25 09:44:49 compute-1 sudo[184777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:49 compute-1 python3.9[184779]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:49 compute-1 sudo[184777]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:49 compute-1 sudo[184929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztgongnwlqxirgcpgqlgetqvnrxnjqca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063889.567482-1989-256106984310352/AnsiballZ_file.py'
Nov 25 09:44:49 compute-1 sudo[184929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:49.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:49 compute-1 python3.9[184931]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:49 compute-1 sudo[184929]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420005f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:50 compute-1 sudo[185081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snhhkaoyiuuqwejogseznjiqkzeovocu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063890.0310605-1989-61469798701917/AnsiballZ_file.py'
Nov 25 09:44:50 compute-1 sudo[185081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:50 compute-1 python3.9[185083]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:50 compute-1 sudo[185081]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:50.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:50 compute-1 sudo[185233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbrpuanrdlisgxirmfojqpncsijafafq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063890.4828207-1989-73257694478610/AnsiballZ_file.py'
Nov 25 09:44:50 compute-1 sudo[185233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:50 compute-1 ceph-mon[79643]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:50 compute-1 python3.9[185235]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:50 compute-1 sudo[185233]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:51 compute-1 sudo[185385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rssicsormygtgxgbyksyqunnizzbdrgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063890.9329166-1989-87761131857610/AnsiballZ_file.py'
Nov 25 09:44:51 compute-1 sudo[185385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:51 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:51 compute-1 python3.9[185387]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:51 compute-1 sudo[185385]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:51 compute-1 sudo[185538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylyofbccczyexrvkickovqyechwylwxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063891.376526-1989-17441357399502/AnsiballZ_file.py'
Nov 25 09:44:51 compute-1 sudo[185538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:51 compute-1 python3.9[185540]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:51 compute-1 sudo[185538]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:51.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c004930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:52 compute-1 sudo[185690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llbhnjjvozvomyhyshwkoipwzanwmfnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063891.852069-1989-85239010822299/AnsiballZ_file.py'
Nov 25 09:44:52 compute-1 sudo[185690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:52 compute-1 python3.9[185692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:52 compute-1 sudo[185690]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:52 compute-1 sudo[185842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krsymwhojkytvedinfnympfabrngwphe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063892.4912562-1989-170362573437707/AnsiballZ_file.py'
Nov 25 09:44:52 compute-1 sudo[185842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:52 compute-1 ceph-mon[79643]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:52 compute-1 python3.9[185844]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:52 compute-1 sudo[185842]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:53 compute-1 sudo[185994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrtmulwezixnikydrqxbjfyquibnjtwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063892.9457078-1989-124314806992680/AnsiballZ_file.py'
Nov 25 09:44:53 compute-1 sudo[185994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:53 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:53 compute-1 python3.9[185996]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:53 compute-1 sudo[185994]: pam_unix(sudo:session): session closed for user root
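Between 09:44:47 and 09:44:53 the zuul user runs fourteen ansible-ansible.builtin.file tasks, one per libvirt socket unit (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd, plus their -ro and -admin variants where present), each creating a root-owned 0755 drop-in directory under /etc/systemd/system. The playbook itself is not in the log, so the loop below is a sketch of the equivalent operation, assuming the tasks iterate over a socket list; each AnsiballZ_file.py invocation in the log corresponds to one iteration, shipped to the host and executed under sudo.

    # Sketch of what the repeated ansible.builtin.file tasks above do
    # (assumed loop; the actual playbook is not captured in this log).
    import os, shutil

    SOCKETS = [
        "virtlogd", "virtlogd-admin",
        "virtnodedevd", "virtnodedevd-ro", "virtnodedevd-admin",
        "virtproxyd", "virtproxyd-ro", "virtproxyd-admin",
        "virtqemud", "virtqemud-ro", "virtqemud-admin",
        "virtsecretd", "virtsecretd-ro", "virtsecretd-admin",
    ]

    for name in SOCKETS:
        d = f"/etc/systemd/system/{name}.socket.d"
        os.makedirs(d, exist_ok=True)       # state=directory
        os.chmod(d, 0o755)                  # mode=0755
        shutil.chown(d, "root", "root")     # owner/group=root (needs root)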
Nov 25 09:44:53 compute-1 sudo[186147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-namlcxnmcqmfmirriprmyedjbjdrdjqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063893.5114632-2286-276262510058004/AnsiballZ_stat.py'
Nov 25 09:44:53 compute-1 sudo[186147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:53 compute-1 python3.9[186149]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:53 compute-1 sudo[186147]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:54 compute-1 sudo[186270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chukvvlkzumfwgrbkhtbislelajzbfas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063893.5114632-2286-276262510058004/AnsiballZ_copy.py'
Nov 25 09:44:54 compute-1 sudo[186270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:54 compute-1 python3.9[186272]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063893.5114632-2286-276262510058004/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:54 compute-1 sudo[186270]: pam_unix(sudo:session): session closed for user root
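The stat/copy pair above is the first of fourteen: ansible-ansible.legacy.stat checks for an existing override.conf, then ansible-ansible.legacy.copy installs one rendered from the libvirt-socket.unit.j2 template. The rendered content is deliberately withheld from the log (content=NOT_LOGGING_PARAMETER), but every copy in this section reports the same SHA-1, 0bad41f409b4ee7e780a2a59dc18f5c84ed99826, so the same drop-in goes to every socket unit. The sketch below installs a purely illustrative [Socket] override; the real keys rendered by the template are not recoverable from this log.

    # Illustrative only: the actual override.conf contents are not logged.
    import os, shutil

    OVERRIDE = "[Socket]\nSocketMode=0660\n"   # hypothetical keys

    path = "/etc/systemd/system/virtlogd.socket.d/override.conf"
    with open(path, "w") as f:                 # dest= from the copy task
        f.write(OVERRIDE)
    os.chmod(path, 0o644)                      # mode=0644
    shutil.chown(path, "root", "root")         # owner/group=root (needs root)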
Nov 25 09:44:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:54.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:54 compute-1 sudo[186422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjhmpvakqyuvkgavmxrvmqymgpyzvbip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063894.3823473-2286-263444394508100/AnsiballZ_stat.py'
Nov 25 09:44:54 compute-1 sudo[186422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:54 compute-1 python3.9[186424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:54 compute-1 sudo[186422]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:54 compute-1 ceph-mon[79643]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:54 compute-1 sudo[186545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbsjmxojagotqjjtnnwjzwnqzsvndolh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063894.3823473-2286-263444394508100/AnsiballZ_copy.py'
Nov 25 09:44:54 compute-1 sudo[186545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:55 compute-1 python3.9[186547]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063894.3823473-2286-263444394508100/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:55 compute-1 sudo[186545]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:55 compute-1 sudo[186698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohmawoqbbihgfcdvnltnnjaqkzbqcrbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063895.3430345-2286-260646960583096/AnsiballZ_stat.py'
Nov 25 09:44:55 compute-1 sudo[186698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:55 compute-1 python3.9[186700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:55 compute-1 sudo[186698]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:55 compute-1 ceph-mon[79643]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:55.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:55 compute-1 sudo[186821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqpbvnjmaacflhkysqsnflfucjhsaqhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063895.3430345-2286-260646960583096/AnsiballZ_copy.py'
Nov 25 09:44:55 compute-1 sudo[186821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:56 compute-1 python3.9[186823]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063895.3430345-2286-260646960583096/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:56 compute-1 sudo[186821]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:56 compute-1 sudo[186973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhxayvsljkhucshrmjedrxvjlkzvxhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063896.220683-2286-22461067579873/AnsiballZ_stat.py'
Nov 25 09:44:56 compute-1 sudo[186973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:56.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:56 compute-1 python3.9[186975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:56 compute-1 sudo[186973]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:56 compute-1 sudo[187096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esvqyqjhxaslozxglvsixtaabyymsoop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063896.220683-2286-22461067579873/AnsiballZ_copy.py'
Nov 25 09:44:56 compute-1 sudo[187096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:56 compute-1 python3.9[187098]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063896.220683-2286-22461067579873/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:56 compute-1 sudo[187096]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:57 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:57 compute-1 sudo[187249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbsbqgfatbwnvwxvpiymudxfuocqbcga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063897.0635135-2286-259702800361847/AnsiballZ_stat.py'
Nov 25 09:44:57 compute-1 sudo[187249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:57 compute-1 python3.9[187251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:57 compute-1 sudo[187249]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:57 compute-1 sudo[187372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yidocmfyhcekwqmletyrzoxhxfmkzggc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063897.0635135-2286-259702800361847/AnsiballZ_copy.py'
Nov 25 09:44:57 compute-1 sudo[187372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:57 compute-1 python3.9[187374]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063897.0635135-2286-259702800361847/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:57 compute-1 sudo[187372]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:57.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:58 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420006c80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:58 compute-1 sudo[187524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbubimglsdcqfiabxvvrveajhechooxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063897.947187-2286-235345186397050/AnsiballZ_stat.py'
Nov 25 09:44:58 compute-1 sudo[187524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:58 compute-1 ceph-mon[79643]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:58 compute-1 python3.9[187526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:58 compute-1 sudo[187524]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:44:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:58.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:44:58 compute-1 sudo[187647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lovkictymllljddwogkxrwacmqmpfeix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063897.947187-2286-235345186397050/AnsiballZ_copy.py'
Nov 25 09:44:58 compute-1 sudo[187647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:58 compute-1 python3.9[187649]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063897.947187-2286-235345186397050/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:58 compute-1 sudo[187647]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:58 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:59 compute-1 sudo[187799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayeclvuxxrsgnhiqxkvqrkpfpadhgted ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063898.939985-2286-246261034015998/AnsiballZ_stat.py'
Nov 25 09:44:59 compute-1 sudo[187799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:44:59 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:59 compute-1 python3.9[187801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:59 compute-1 sudo[187799]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:59 compute-1 sudo[187923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbvvzpqzkjocfmcuyurrooeneyhwzocl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063898.939985-2286-246261034015998/AnsiballZ_copy.py'
Nov 25 09:44:59 compute-1 sudo[187923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:59 compute-1 python3.9[187925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063898.939985-2286-246261034015998/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:59 compute-1 sudo[187923]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:44:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:59.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:59 compute-1 sudo[188075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klpydyqadbkgrjupqvdbilyxiypgfppt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063899.7922723-2286-143459778504150/AnsiballZ_stat.py'
Nov 25 09:44:59 compute-1 sudo[188075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:00 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:00 compute-1 python3.9[188077]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:00 compute-1 sudo[188075]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:00 compute-1 sudo[188124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:00 compute-1 sudo[188124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:00 compute-1 sudo[188124]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:00 compute-1 ceph-mon[79643]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:00 compute-1 sudo[188223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmemtvmwcyoatopomnoiyjlpbxfigqwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063899.7922723-2286-143459778504150/AnsiballZ_copy.py'
Nov 25 09:45:00 compute-1 sudo[188223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:00.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:00 compute-1 python3.9[188225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063899.7922723-2286-143459778504150/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:00 compute-1 sudo[188223]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:00 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:00 compute-1 sudo[188375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dddppobylhaaqbabfkpcxamerkwapzea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063900.6724563-2286-125473423404562/AnsiballZ_stat.py'
Nov 25 09:45:00 compute-1 sudo[188375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:01 compute-1 python3.9[188377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:01 compute-1 sudo[188375]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:01 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:01 compute-1 sudo[188499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhnsfvwsuhktrmwlmtnceyrflxnhdyug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063900.6724563-2286-125473423404562/AnsiballZ_copy.py'
Nov 25 09:45:01 compute-1 sudo[188499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:01 compute-1 python3.9[188501]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063900.6724563-2286-125473423404562/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:01 compute-1 sudo[188499]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:01.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:01 compute-1 sudo[188651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtprmexaqzpuylpljexifknpwhugrbxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063901.6699781-2286-73869892499660/AnsiballZ_stat.py'
Nov 25 09:45:01 compute-1 sudo[188651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:02 compute-1 python3.9[188653]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:02 compute-1 sudo[188651]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:02 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:02 compute-1 sudo[188774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwylualzcepdxdopaqlqdmekveudhtwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063901.6699781-2286-73869892499660/AnsiballZ_copy.py'
Nov 25 09:45:02 compute-1 sudo[188774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:02 compute-1 ceph-mon[79643]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:02 compute-1 python3.9[188776]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063901.6699781-2286-73869892499660/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:02 compute-1 sudo[188774]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:02.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:02 compute-1 sudo[188926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olrugntpohcwjpsgoxhleozmmdlrqdvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063902.5280652-2286-248600294110632/AnsiballZ_stat.py'
Nov 25 09:45:02 compute-1 sudo[188926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:02 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420007990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:02 compute-1 python3.9[188928]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:02 compute-1 sudo[188926]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:03 compute-1 sudo[189049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwkmeuxmxhwutufuufcujwrbmtwhryhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063902.5280652-2286-248600294110632/AnsiballZ_copy.py'
Nov 25 09:45:03 compute-1 sudo[189049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:03 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:03 compute-1 python3.9[189051]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063902.5280652-2286-248600294110632/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:03 compute-1 sudo[189049]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:03 compute-1 sudo[189202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anoubbepkttqhwfvekwfwzgegoykbiph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063903.4516659-2286-86735279741858/AnsiballZ_stat.py'
Nov 25 09:45:03 compute-1 sudo[189202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:03 compute-1 python3.9[189204]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:03 compute-1 sudo[189202]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:03.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:04 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:04 compute-1 sudo[189325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuuutiwninqzhzkvusulgeeyxzxlxsqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063903.4516659-2286-86735279741858/AnsiballZ_copy.py'
Nov 25 09:45:04 compute-1 sudo[189325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:04 compute-1 ceph-mon[79643]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:04 compute-1 python3.9[189327]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063903.4516659-2286-86735279741858/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:04.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:04 compute-1 sudo[189325]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:04 compute-1 sudo[189477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvqygpwjbvcdjjoijvqcoteiijayaogg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063904.5368917-2286-69353606788771/AnsiballZ_stat.py'
Nov 25 09:45:04 compute-1 sudo[189477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:04 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:04 compute-1 python3.9[189479]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:04 compute-1 sudo[189477]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:45:04.992 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:45:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:45:04.992 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:45:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:45:04.992 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
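The three ovn_metadata_agent lines above are the standard oslo.concurrency trace for one guarded call: "Acquiring lock", "acquired ... waited 0.000s", and "released ... held 0.000s", all emitted by the inner wrapper in lockutils.py around neutron's ProcessMonitor._check_child_processes. That trace comes from decorating a method with lockutils.synchronized, as in this minimal sketch (the child-process respawn logic neutron runs inside the lock is omitted):

    # Requires oslo.concurrency; DEBUG logging reproduces the three-line trace.
    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            pass  # neutron checks/respawns monitored child processes here

    ProcessMonitor()._check_child_processes()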
Nov 25 09:45:05 compute-1 sudo[189600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvbxfxvaorkddzzqcviffrvpxjffrsmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063904.5368917-2286-69353606788771/AnsiballZ_copy.py'
Nov 25 09:45:05 compute-1 sudo[189600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:05 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8420007990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:05 compute-1 python3.9[189602]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063904.5368917-2286-69353606788771/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:05 compute-1 sudo[189600]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:05 compute-1 sudo[189753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doeqkbkvowadjjqwflwttcvajlkpelww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063905.4256306-2286-265146004356791/AnsiballZ_stat.py'
Nov 25 09:45:05 compute-1 sudo[189753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:05 compute-1 python3.9[189755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:05 compute-1 sudo[189753]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:05.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:06 compute-1 sudo[189876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crcrozubjhaxytegdalpfcstnnfndkgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063905.4256306-2286-265146004356791/AnsiballZ_copy.py'
Nov 25 09:45:06 compute-1 sudo[189876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:06 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:06 compute-1 python3.9[189878]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063905.4256306-2286-265146004356791/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:06 compute-1 sudo[189876]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:06 compute-1 ceph-mon[79643]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:06.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:06 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:07 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:07 compute-1 podman[190003]: 2025-11-25 09:45:07.457902027 +0000 UTC m=+0.036905314 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 09:45:07 compute-1 python3.9[190042]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
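[Annotation] The task above is a readiness check: it lists /run/libvirt with SELinux contexts (-Z) and greps for container_*_t types, so it succeeds only if the libvirt runtime directory is labeled for container access. A line that would satisfy the grep could look like the following (purely illustrative; the actual entries and labels are not captured in this log):

    srwxrwxrwx. root root system_u:object_r:container_file_t:s0 virtqemud-sock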
Nov 25 09:45:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:07.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:08 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:08 compute-1 sudo[190199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgsvmqizkpkvncpvktidviziyhtwxzup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063907.8136322-2904-60560342517217/AnsiballZ_seboolean.py'
Nov 25 09:45:08 compute-1 sudo[190199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:08 compute-1 python3.9[190201]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 09:45:08 compute-1 ceph-mon[79643]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:08.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:08 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:09 compute-1 sudo[190199]: pam_unix(sudo:session): session closed for user root
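[Annotation] The seboolean task above persistently enables the os_enable_vtpm SELinux boolean, evidently to allow virtual TPM support for guests. Outside Ansible, the same change could be made directly; a minimal sketch:

    setsebool -P os_enable_vtpm on    # -P persists the change in the policy store
    getsebool os_enable_vtpm          # should report: os_enable_vtpm --> on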
Nov 25 09:45:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:09 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:09 compute-1 sudo[190358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imqddbugzoligfehkrwszkqddhalnsmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063909.3661838-2928-9861458693405/AnsiballZ_copy.py'
Nov 25 09:45:09 compute-1 dbus-broker-launch[736]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 09:45:09 compute-1 sudo[190358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:09 compute-1 python3.9[190360]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:09 compute-1 sudo[190358]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:09.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:10 compute-1 sudo[190510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxlhbfrvgsbcqdfhmbdxnxkwffahlcdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063909.865625-2928-217829359619669/AnsiballZ_copy.py'
Nov 25 09:45:10 compute-1 sudo[190510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:10 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:10 compute-1 python3.9[190512]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:10 compute-1 sudo[190510]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:10 compute-1 ceph-mon[79643]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:10.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:10 compute-1 sudo[190662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnhjzsknxyutltgpliqnptzbqqgrkbim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063910.4240787-2928-60831202018559/AnsiballZ_copy.py'
Nov 25 09:45:10 compute-1 sudo[190662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:10 compute-1 python3.9[190664]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:10 compute-1 sudo[190662]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:10 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:11 compute-1 sudo[190814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvwatwzofysdeivzkevynuhbqeibgnmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063910.876568-2928-116656849296500/AnsiballZ_copy.py'
Nov 25 09:45:11 compute-1 sudo[190814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:11 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:11 compute-1 python3.9[190816]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:11 compute-1 sudo[190814]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:11 compute-1 sudo[190967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skvuczwuewwubvwpnfziddbvwityrvdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063911.3434238-2928-267053442159543/AnsiballZ_copy.py'
Nov 25 09:45:11 compute-1 sudo[190967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:11 compute-1 python3.9[190969]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:11 compute-1 sudo[190967]: pam_unix(sudo:session): session closed for user root
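[Annotation] The five copy tasks above lay out the TLS material at libvirt's default paths: /etc/pki/libvirt/servercert.pem, /etc/pki/libvirt/private/serverkey.pem, /etc/pki/libvirt/clientcert.pem, /etc/pki/libvirt/private/clientkey.pem, and /etc/pki/CA/cacert.pem, all sourced from the same tls.crt/tls.key/ca.crt set under /var/lib/openstack/certs/libvirt/default. One way to sanity-check the resulting layout (not run in this log) is libvirt's own helper script:

    virt-pki-validate    # verifies CA, server, and client certificate placement and permissions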
Nov 25 09:45:11 compute-1 sudo[190994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:11 compute-1 sudo[190994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:11 compute-1 sudo[190994]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:11 compute-1 sudo[191019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:45:11 compute-1 sudo[191019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:12 compute-1 sudo[191180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocxpcxaoacszpvmlfmkruwgenrddpdee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063911.9348586-3036-278017610396329/AnsiballZ_copy.py'
Nov 25 09:45:12 compute-1 sudo[191180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094512 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:45:12 compute-1 python3.9[191182]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:12 compute-1 sudo[191019]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:12 compute-1 sudo[191180]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:12 compute-1 ceph-mon[79643]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:12 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:45:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:12.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:12 compute-1 sudo[191349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acqjtspulacrcohhadwutkzjsxrwxobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063912.4165816-3036-125125091407223/AnsiballZ_copy.py'
Nov 25 09:45:12 compute-1 sudo[191349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:12 compute-1 python3.9[191351]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:12 compute-1 sudo[191349]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:13 compute-1 sudo[191501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tghqxmxfzdaytdrwcaxhekjpyqkaaxoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063912.8709104-3036-246125731808519/AnsiballZ_copy.py'
Nov 25 09:45:13 compute-1 sudo[191501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:13 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:13 compute-1 python3.9[191503]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:13 compute-1 sudo[191501]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:13 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:13 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:13 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:45:13 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:45:13 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:13 compute-1 sudo[191654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvbjizizkwzouwabyfdvoqoovgtnpsrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063913.3428092-3036-32230675251758/AnsiballZ_copy.py'
Nov 25 09:45:13 compute-1 sudo[191654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:13 compute-1 python3.9[191656]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:13 compute-1 sudo[191654]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:13.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:14 compute-1 sudo[191806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltrvasvhzsaiwhmagjalqgporgfowpro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063913.8447165-3036-207928912679330/AnsiballZ_copy.py'
Nov 25 09:45:14 compute-1 sudo[191806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:14 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:14 compute-1 python3.9[191808]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:14 compute-1 sudo[191806]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:14 compute-1 podman[191809]: 2025-11-25 09:45:14.261042119 +0000 UTC m=+0.067581135 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 09:45:14 compute-1 ceph-mon[79643]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:14.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:14 compute-1 sudo[191981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hncsytyarvnjafrppqgmbfwnpzmbinjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063914.6108787-3144-259000979323691/AnsiballZ_systemd.py'
Nov 25 09:45:14 compute-1 sudo[191981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:14 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:15 compute-1 python3.9[191983]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:15 compute-1 systemd[1]: Reloading.
Nov 25 09:45:15 compute-1 systemd-rc-local-generator[192003]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:15 compute-1 systemd-sysv-generator[192007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:15 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 09:45:15 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 09:45:15 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 09:45:15 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 09:45:15 compute-1 systemd[1]: Starting libvirt logging daemon...
Nov 25 09:45:15 compute-1 systemd[1]: Started libvirt logging daemon.
Nov 25 09:45:15 compute-1 sudo[191981]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:15 compute-1 sudo[192175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufbpimtswvaeedrznjwnatniumprpciw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063915.5052867-3144-126523364805623/AnsiballZ_systemd.py'
Nov 25 09:45:15 compute-1 sudo[192175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:15 compute-1 sudo[192178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:45:15 compute-1 sudo[192178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:15 compute-1 sudo[192178]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:15.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:15 compute-1 python3.9[192177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:15 compute-1 systemd[1]: Reloading.
Nov 25 09:45:16 compute-1 systemd-sysv-generator[192226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:16 compute-1 systemd-rc-local-generator[192223]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:16 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00021c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:16 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 09:45:16 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 09:45:16 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 09:45:16 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 09:45:16 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 09:45:16 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 09:45:16 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 09:45:16 compute-1 systemd[1]: Started libvirt nodedev daemon.
Nov 25 09:45:16 compute-1 sudo[192175]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:16.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:16 compute-1 sudo[192416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xapakbtdepkeensnuckfzsydbzjbdyxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063916.436108-3144-204189248307359/AnsiballZ_systemd.py'
Nov 25 09:45:16 compute-1 sudo[192416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:16 compute-1 ceph-mon[79643]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:16 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:16 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:16 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:16 compute-1 python3.9[192418]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:16 compute-1 systemd[1]: Reloading.
Nov 25 09:45:16 compute-1 systemd-rc-local-generator[192439]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:16 compute-1 systemd-sysv-generator[192442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:17 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 09:45:17 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 09:45:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:17 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 09:45:17 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 09:45:17 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 09:45:17 compute-1 systemd[1]: Starting libvirt proxy daemon...
Nov 25 09:45:17 compute-1 systemd[1]: Started libvirt proxy daemon.
Nov 25 09:45:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:17 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:17 compute-1 sudo[192416]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:17 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 09:45:17 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 09:45:17 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 09:45:17 compute-1 sudo[192633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eafzekeidmihmksqddjgpaxkxuwegodc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063917.3106813-3144-145775949570746/AnsiballZ_systemd.py'
Nov 25 09:45:17 compute-1 sudo[192633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:17 compute-1 python3.9[192638]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:17 compute-1 systemd[1]: Reloading.
Nov 25 09:45:17 compute-1 systemd-rc-local-generator[192661]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:17 compute-1 systemd-sysv-generator[192664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:17.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:18 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 09:45:18 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 09:45:18 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 09:45:18 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 09:45:18 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 09:45:18 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 09:45:18 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 09:45:18 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 09:45:18 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 09:45:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:18 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 09:45:18 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 09:45:18 compute-1 systemd[1]: Started libvirt QEMU daemon.
Nov 25 09:45:18 compute-1 sudo[192633]: pam_unix(sudo:session): session closed for user root
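[Annotation] The restarts above bring up libvirt's modular daemons one by one (virtlogd, virtnodedevd, virtproxyd, virtqemud, with virtsecretd following), each with its socket-activated units, in place of a single monolithic libvirtd. A quick way to confirm one daemon's socket set is healthy (illustrative, not from this log):

    systemctl --no-pager status virtqemud.service virtqemud.socket virtqemud-admin.socket virtqemud-ro.socket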
Nov 25 09:45:18 compute-1 setroubleshoot[192455]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l abb1d413-7899-443b-9d22-4909cbcfd49a
Nov 25 09:45:18 compute-1 setroubleshoot[192455]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 25 09:45:18 compute-1 sudo[192854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouuxcdcbgqtiiapqiqcztipidomcwzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063918.2131248-3144-113463481245303/AnsiballZ_systemd.py'
Nov 25 09:45:18 compute-1 sudo[192854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:18 compute-1 python3.9[192856]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:18 compute-1 systemd[1]: Reloading.
Nov 25 09:45:18 compute-1 ceph-mon[79643]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:18 compute-1 systemd-rc-local-generator[192876]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:18 compute-1 systemd-sysv-generator[192880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00021c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:18 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 09:45:18 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 09:45:18 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 09:45:18 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 09:45:18 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 09:45:18 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 09:45:18 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 25 09:45:18 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 25 09:45:18 compute-1 sudo[192854]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:19 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:19.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:19 compute-1 sudo[193068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lellqdhtsnekqbooqwttkcvuqzlmeueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063919.7411463-3256-196038615085515/AnsiballZ_file.py'
Nov 25 09:45:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:19 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:45:19 compute-1 sudo[193068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:20 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:20 compute-1 python3.9[193070]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:20 compute-1 sudo[193068]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:20 compute-1 sudo[193115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:20 compute-1 sudo[193115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:20 compute-1 sudo[193115]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:20.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:20 compute-1 sudo[193245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrbogtryfcsktlobplznanwywmxgqmha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063920.305838-3279-87765851415654/AnsiballZ_find.py'
Nov 25 09:45:20 compute-1 sudo[193245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:20 compute-1 python3.9[193247]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:45:20 compute-1 sudo[193245]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:20 compute-1 ceph-mon[79643]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:20 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:21 compute-1 sudo[193397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezzqyaopdftrndquzsvcmbtnjqydxqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063920.8980956-3303-200469485322280/AnsiballZ_command.py'
Nov 25 09:45:21 compute-1 sudo[193397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:21 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f00021c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:21 compute-1 python3.9[193399]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:21 compute-1 sudo[193397]: pam_unix(sudo:session): session closed for user root
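[Annotation] The pipeline above emits the cluster name (hard-coded as "ceph") and extracts the cluster fsid by parsing /var/lib/openstack/config/ceph/ceph.conf, with xargs trimming surrounding whitespace. The matching ceph.conf fragment would look like this (fsid taken from the Ceph unit names elsewhere in this log; the rest is illustrative):

    [global]
    fsid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90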
Nov 25 09:45:21 compute-1 python3.9[193554]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:45:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:21.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:22 compute-1 ceph-mon[79643]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:22 compute-1 python3.9[193704]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:45:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:45:23 compute-1 python3.9[193825]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063922.2398481-3360-114700800850625/.source.xml follow=False _original_basename=secret.xml.j2 checksum=ee7fcb172a9e9a6851069e0487499aace39188fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:23 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:23 compute-1 sudo[193976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omrghrlmjowkpgfprnwmcilxykoikrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063923.33779-3405-201483016350313/AnsiballZ_command.py'
Nov 25 09:45:23 compute-1 sudo[193976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:23 compute-1 python3.9[193978]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:23 compute-1 polkitd[43350]: Registered Authentication Agent for unix-process:193980:270826 (system bus name :1.1862 [pkttyagent --process 193980 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 09:45:23 compute-1 polkitd[43350]: Unregistered Authentication Agent for unix-process:193980:270826 (system bus name :1.1862, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 09:45:23 compute-1 polkitd[43350]: Registered Authentication Agent for unix-process:193979:270826 (system bus name :1.1863 [pkttyagent --process 193979 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 09:45:23 compute-1 polkitd[43350]: Unregistered Authentication Agent for unix-process:193979:270826 (system bus name :1.1863, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 09:45:23 compute-1 sudo[193976]: pam_unix(sudo:session): session closed for user root
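[Annotation] The command above drops any existing libvirt secret for the Ceph fsid and re-defines it from /tmp/secret.xml; the polkit agent registrations are the two virsh invocations authenticating. The XML body itself is not logged (content=NOT_LOGGING_PARAMETER), but a ceph-usage secret keyed on the fsid is the usual shape (a sketch; the usage name is purely illustrative):

    <secret ephemeral='no' private='no'>
      <uuid>af1c9ae3-08d7-5547-a53d-2cccf7c6ef90</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>

The KEY exported in the environment a few lines below suggests the follow-up task sets the secret's value, e.g. virsh secret-set-value --secret <uuid> --base64 <key>.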
Nov 25 09:45:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:23.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:24 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:24.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:24 compute-1 ceph-mon[79643]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:24 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:25 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:25 compute-1 python3.9[194140]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:25 compute-1 sudo[194291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laoebeivbcojkjgcsyyythiqpofgmqfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063925.4639025-3453-198005815170536/AnsiballZ_command.py'
Nov 25 09:45:25 compute-1 sudo[194291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:25.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:25 compute-1 sudo[194291]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:25 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:45:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:26 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:26 compute-1 sudo[194446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvgyvpggdqyahuaxufplzvaetjhwjtns ; FSID=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 KEY=AQBHdyVpAAAAABAACuXVpdObkUXtdSdlcr1vHw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063926.161876-3477-179807776206529/AnsiballZ_command.py'
Nov 25 09:45:26 compute-1 sudo[194446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:26.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:26 compute-1 polkitd[43350]: Registered Authentication Agent for unix-process:194449:271108 (system bus name :1.1866 [pkttyagent --process 194449 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 09:45:26 compute-1 polkitd[43350]: Unregistered Authentication Agent for unix-process:194449:271108 (system bus name :1.1866, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 09:45:26 compute-1 sudo[194446]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:26 compute-1 ceph-mon[79643]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:26 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:26 compute-1 sudo[194604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgzjrwnubzsjuueroewaizfmqfadfotl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063926.7674227-3501-245890133304215/AnsiballZ_copy.py'
Nov 25 09:45:26 compute-1 sudo[194604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:27 compute-1 python3.9[194606]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:27 compute-1 sudo[194604]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:27 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428002630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:27 compute-1 sudo[194757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzivrtlcibxqvoxumjcakkzjzzujrtqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063927.309463-3525-277467235479792/AnsiballZ_stat.py'
Nov 25 09:45:27 compute-1 sudo[194757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:27 compute-1 python3.9[194759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:27 compute-1 sudo[194757]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:27.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:27 compute-1 sudo[194880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkpvlvpfcvfmhdsmotlfllabwqctoyoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063927.309463-3525-277467235479792/AnsiballZ_copy.py'
Nov 25 09:45:27 compute-1 sudo[194880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:28 compute-1 python3.9[194882]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063927.309463-3525-277467235479792/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:28 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00c920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:28 compute-1 sudo[194880]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:28 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 09:45:28 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 09:45:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:28.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:28 compute-1 sudo[195032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqdlnmremgvttzcwsjavxdcggdcrmeza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063928.3814049-3573-3406476006057/AnsiballZ_file.py'
Nov 25 09:45:28 compute-1 sudo[195032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:28 compute-1 python3.9[195034]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:28 compute-1 ceph-mon[79643]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:28 compute-1 sudo[195032]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:28 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:29 compute-1 sudo[195184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqtwjyympxfuzbclwfqfpyotlvshiibu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063928.9952333-3597-25410485697503/AnsiballZ_stat.py'
Nov 25 09:45:29 compute-1 sudo[195184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:29 compute-1 python3.9[195187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:29 compute-1 sudo[195184]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:29 compute-1 sudo[195263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dikqblvfasdqvpiffkmvbtnipksqfgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063928.9952333-3597-25410485697503/AnsiballZ_file.py'
Nov 25 09:45:29 compute-1 sudo[195263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:29 compute-1 python3.9[195265]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:29 compute-1 sudo[195263]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:29.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:30 compute-1 sudo[195415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zchdqtfmazlbfjkvfmfecrulcwbpgopz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063929.833739-3633-136530595691958/AnsiballZ_stat.py'
Nov 25 09:45:30 compute-1 sudo[195415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:30 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84280031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:30 compute-1 python3.9[195417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:30 compute-1 sudo[195415]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:30 compute-1 sudo[195493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drehgclptsabsuoibuvhshgzyvlredlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063929.833739-3633-136530595691958/AnsiballZ_file.py'
Nov 25 09:45:30 compute-1 sudo[195493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:30.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:30 compute-1 python3.9[195495]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.bu9x6r1r recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:30 compute-1 sudo[195493]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:30 compute-1 auditd[675]: Audit daemon rotating log files
Nov 25 09:45:30 compute-1 ceph-mon[79643]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:30 compute-1 sudo[195645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkenhoapumgtwehlgcgkmwieilgpkrhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063930.669888-3669-158271840025961/AnsiballZ_stat.py'
Nov 25 09:45:30 compute-1 sudo[195645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:30 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00c920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:31 compute-1 python3.9[195647]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:31 compute-1 sudo[195645]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:31 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00c920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:31 compute-1 sudo[195724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pumligwgiyuhbylpxwnjxikxsojrnquz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063930.669888-3669-158271840025961/AnsiballZ_file.py'
Nov 25 09:45:31 compute-1 sudo[195724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:31 compute-1 python3.9[195726]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:31 compute-1 sudo[195724]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:31 compute-1 sudo[195876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsmicffmzwwlwtndpffjtrallthcwdqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063931.5810492-3708-32247858572345/AnsiballZ_command.py'
Nov 25 09:45:31 compute-1 sudo[195876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:31.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:31 compute-1 python3.9[195878]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:31 compute-1 sudo[195876]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094532 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:45:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:32.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:32 compute-1 sudo[196029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egucyktfvlntzkjgnanjmojrhckgkdtp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063932.240133-3732-193520790440527/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 09:45:32 compute-1 sudo[196029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:32 compute-1 python3[196031]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 09:45:32 compute-1 sudo[196029]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:32 compute-1 ceph-mon[79643]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84280031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:33 compute-1 sudo[196181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvvgwkfzhclzgwtjcguovwhcltsscjqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063932.852184-3756-228078624141179/AnsiballZ_stat.py'
Nov 25 09:45:33 compute-1 sudo[196181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:33 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00c920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:33 compute-1 python3.9[196183]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:33 compute-1 sudo[196181]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:33 compute-1 sudo[196260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkrwjehirlmfswnhblrzqwiejuafhqxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063932.852184-3756-228078624141179/AnsiballZ_file.py'
Nov 25 09:45:33 compute-1 sudo[196260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:33 compute-1 python3.9[196262]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:33 compute-1 sudo[196260]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:33.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:33 compute-1 sudo[196412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oksscufdbvobmgcicgafnntrlljrrutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063933.7168803-3792-164163174794905/AnsiballZ_stat.py'
Nov 25 09:45:33 compute-1 sudo[196412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:34 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00c920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:34 compute-1 python3.9[196414]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:34 compute-1 sudo[196412]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:34 compute-1 sudo[196490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svmrxpufoxddjwitodjynztyybmjcoks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063933.7168803-3792-164163174794905/AnsiballZ_file.py'
Nov 25 09:45:34 compute-1 sudo[196490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:34 compute-1 python3.9[196492]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:34 compute-1 sudo[196490]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:34.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:34 compute-1 ceph-mon[79643]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:34 compute-1 sudo[196642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfkohvemegmqbewsbwffdredwampuncq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063934.5822623-3828-207689693731632/AnsiballZ_stat.py'
Nov 25 09:45:34 compute-1 sudo[196642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:34 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:34 compute-1 python3.9[196644]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:34 compute-1 sudo[196642]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:35 compute-1 sudo[196720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sslmrnumqjulqespwtzxmedyatnjjlqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063934.5822623-3828-207689693731632/AnsiballZ_file.py'
Nov 25 09:45:35 compute-1 sudo[196720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84280040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:35 compute-1 python3.9[196722]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:35 compute-1 sudo[196720]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:35 compute-1 sudo[196873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blpwnzlpexlzrekhkjccxwwhkuophphi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063935.448906-3864-4037422923884/AnsiballZ_stat.py'
Nov 25 09:45:35 compute-1 sudo[196873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:35 compute-1 python3.9[196875]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:35 compute-1 sudo[196873]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:35.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:35 compute-1 sudo[196951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxlvutprhxswskdyszhijexcrakoekld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063935.448906-3864-4037422923884/AnsiballZ_file.py'
Nov 25 09:45:35 compute-1 sudo[196951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00c920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:36 compute-1 python3.9[196953]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:36 compute-1 sudo[196951]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:36.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:36 compute-1 sudo[197103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idvankldxhaqczjmhdymkdkcphbewzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063936.3641195-3900-154660670139789/AnsiballZ_stat.py'
Nov 25 09:45:36 compute-1 sudo[197103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:36 compute-1 ceph-mon[79643]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:36 compute-1 python3.9[197105]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:36 compute-1 sudo[197103]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00c920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:37 compute-1 sudo[197228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqpuqtkegnwnwsjvyhoegnufrxjovilm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063936.3641195-3900-154660670139789/AnsiballZ_copy.py'
Nov 25 09:45:37 compute-1 sudo[197228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:37 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:37 compute-1 python3.9[197230]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063936.3641195-3900-154660670139789/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:37 compute-1 sudo[197228]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:37 compute-1 sudo[197391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oglkcvnfxcrhfgoklsijgpwfrphvlkwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063937.43214-3945-38213930403932/AnsiballZ_file.py'
Nov 25 09:45:37 compute-1 sudo[197391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:37 compute-1 podman[197355]: 2025-11-25 09:45:37.650502639 +0000 UTC m=+0.038473324 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 09:45:37 compute-1 python3.9[197399]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:37 compute-1 sudo[197391]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84280040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:38 compute-1 sudo[197550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyuooboxnbftbjmpvtslwqvzzkrbwvil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063937.9852924-3969-29998276829474/AnsiballZ_command.py'
Nov 25 09:45:38 compute-1 sudo[197550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:38 compute-1 python3.9[197552]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:38 compute-1 sudo[197550]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:38.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:38 compute-1 ceph-mon[79643]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00c920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:38 compute-1 sudo[197705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcgqjagamlsyqsbwknazgmsltulmrcff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063938.61748-3993-44106341796292/AnsiballZ_blockinfile.py'
Nov 25 09:45:38 compute-1 sudo[197705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:39 compute-1 python3.9[197707]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:39 compute-1 sudo[197705]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:39 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:39 compute-1 sudo[197858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nilhynaunojfvcrlgcupzbtjapnfyhdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063939.3758762-4020-18277365163283/AnsiballZ_command.py'
Nov 25 09:45:39 compute-1 sudo[197858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:39 compute-1 python3.9[197860]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:39 compute-1 sudo[197858]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:39 compute-1 ceph-mon[79643]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:45:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:39.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:40 compute-1 sudo[198011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcgkygbyubaifqlgsgimfdzbjwlpkrud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063939.9254265-4044-248340443672212/AnsiballZ_stat.py'
Nov 25 09:45:40 compute-1 sudo[198011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:40 compute-1 python3.9[198013]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:45:40 compute-1 sudo[198011]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:40 compute-1 sudo[198020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:40 compute-1 sudo[198020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:40 compute-1 sudo[198020]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:40.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:40 compute-1 sudo[198190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avbgrdfzowrzvaikupndubopbaeyhddj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063940.4761846-4068-161249575491702/AnsiballZ_command.py'
Nov 25 09:45:40 compute-1 sudo[198190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:40 compute-1 python3.9[198192]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:40 compute-1 sudo[198190]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:41 compute-1 sudo[198346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syhhheoburwrttbvfgbkkglboonopjfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063941.0515182-4092-132280897149397/AnsiballZ_file.py'
Nov 25 09:45:41 compute-1 sudo[198346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:41 compute-1 python3.9[198348]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:41 compute-1 sudo[198346]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:41.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:41 compute-1 sudo[198498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msgfgflfffnpcppihckwelnjdxwmtnad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063941.7796185-4116-222683415851207/AnsiballZ_stat.py'
Nov 25 09:45:41 compute-1 sudo[198498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:42 compute-1 python3.9[198500]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:42 compute-1 sudo[198498]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:42 compute-1 ceph-mon[79643]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:45:42 compute-1 sudo[198621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnsziraeaxqkotddfnaeykmaeirqjaeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063941.7796185-4116-222683415851207/AnsiballZ_copy.py'
Nov 25 09:45:42 compute-1 sudo[198621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:42.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:42 compute-1 python3.9[198623]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063941.7796185-4116-222683415851207/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:42 compute-1 sudo[198621]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:42 compute-1 sudo[198773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udszkjvszknszhrzpmosjcrmwwneznqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063942.715188-4161-57284225149595/AnsiballZ_stat.py'
Nov 25 09:45:42 compute-1 sudo[198773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:43 compute-1 python3.9[198775]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:43 compute-1 sudo[198773]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:43 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:43 compute-1 sudo[198897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qneqfknyzgwwxklsdknxcbqxsdantjxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063942.715188-4161-57284225149595/AnsiballZ_copy.py'
Nov 25 09:45:43 compute-1 sudo[198897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:43 compute-1 python3.9[198899]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063942.715188-4161-57284225149595/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:43 compute-1 sudo[198897]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:43 compute-1 sudo[199049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jswhioluwfpgbiqdgphvolsqrucphkmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063943.6980393-4206-38834654609176/AnsiballZ_stat.py'
Nov 25 09:45:43 compute-1 sudo[199049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:43.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:44 compute-1 python3.9[199051]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:44 compute-1 sudo[199049]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:44 compute-1 sudo[199172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqclvelvbhkmxpkcpijgxpejiqioofmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063943.6980393-4206-38834654609176/AnsiballZ_copy.py'
Nov 25 09:45:44 compute-1 sudo[199172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:44 compute-1 ceph-mon[79643]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:44 compute-1 podman[199174]: 2025-11-25 09:45:44.366997821 +0000 UTC m=+0.061886858 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:45:44 compute-1 python3.9[199175]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063943.6980393-4206-38834654609176/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
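The stat/copy pair above (09:45:44) is Ansible's standard two-step file deployment: ansible.legacy.stat fetches the sha1 of the existing unit, and ansible.legacy.copy only rewrites /etc/systemd/system/virt-guest-shutdown.target when the checksum differs. A minimal sketch of a task that would produce these entries; dest and mode are taken from the log, the task name and role-local src are assumptions:

    # Hypothetical reconstruction of the logged copy step (dest and mode
    # come from the log; name and src are assumed).
    - name: Deploy the virt-guest-shutdown systemd target
      ansible.builtin.copy:
        src: virt-guest-shutdown.target
        dest: /etc/systemd/system/virt-guest-shutdown.target
        mode: "0644"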
Nov 25 09:45:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:44.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:44 compute-1 sudo[199172]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:45 compute-1 sudo[199348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuucmmfhbxsgwuhmqckcksmyzbxgunwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063944.6471777-4251-13671821385888/AnsiballZ_systemd.py'
Nov 25 09:45:45 compute-1 sudo[199348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:45 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:45 compute-1 python3.9[199350]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:45:45 compute-1 systemd[1]: Reloading.
Nov 25 09:45:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:45 compute-1 systemd-rc-local-generator[199376]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:45 compute-1 systemd-sysv-generator[199379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:45 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 09:45:45 compute-1 sudo[199348]: pam_unix(sudo:session): session closed for user root
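The ansible.builtin.systemd call at 09:45:45 combines daemon_reload=True, enabled=True, and state=restarted for edpm_libvirt.target; the "Reloading." pass re-runs the rc-local and sysv generators (hence the two generator messages) before systemd reports the target reached. A sketch with all parameters taken from the log and only the task name assumed:

    - name: Reload systemd, then enable and restart the libvirt target
      ansible.builtin.systemd:
        name: edpm_libvirt.target
        daemon_reload: true
        enabled: true
        state: restarted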
Nov 25 09:45:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:45.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:45 compute-1 sudo[199540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oarsexchzrvefjzxgwxwnviruylnpplk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063945.7841482-4275-251380490894235/AnsiballZ_systemd.py'
Nov 25 09:45:45 compute-1 sudo[199540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:46 compute-1 python3.9[199542]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 09:45:46 compute-1 systemd[1]: Reloading.
Nov 25 09:45:46 compute-1 systemd-sysv-generator[199567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:46 compute-1 systemd-rc-local-generator[199563]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:46 compute-1 ceph-mon[79643]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:46 compute-1 systemd[1]: Reloading.
Nov 25 09:45:46 compute-1 systemd-sysv-generator[199601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:46 compute-1 systemd-rc-local-generator[199598]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:46 compute-1 sudo[199540]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:47 compute-1 sshd-session[143057]: Connection closed by 192.168.122.30 port 33130
Nov 25 09:45:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:47 compute-1 sshd-session[143054]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:45:47 compute-1 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 09:45:47 compute-1 systemd[1]: session-50.scope: Consumed 2min 22.632s CPU time.
Nov 25 09:45:47 compute-1 systemd-logind[746]: Session 50 logged out. Waiting for processes to exit.
Nov 25 09:45:47 compute-1 systemd-logind[746]: Removed session 50.
Nov 25 09:45:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:47 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:47.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00d340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:48 compute-1 ceph-mon[79643]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:50 compute-1 ceph-mon[79643]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:50.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00d340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:51 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00d340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:51.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:52 compute-1 ceph-mon[79643]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:52.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:52 compute-1 sshd-session[199642]: Accepted publickey for zuul from 192.168.122.30 port 37460 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:45:52 compute-1 systemd-logind[746]: New session 51 of user zuul.
Nov 25 09:45:52 compute-1 systemd[1]: Started Session 51 of User zuul.
Nov 25 09:45:52 compute-1 sshd-session[199642]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:45:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e4003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:53 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c00d340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:53 compute-1 python3.9[199796]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:45:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:54 compute-1 ceph-mon[79643]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:54.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:54 compute-1 python3.9[199950]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:45:54 compute-1 network[199967]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:45:54 compute-1 network[199968]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:45:54 compute-1 network[199969]: It is advised to switch to 'NetworkManager' instead for network management.
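The service_facts run at 09:45:54 appears to be what triggers the three network[] warnings above: enumerating SysV-era services probes the legacy network init script, which prints its deprecation notice each time. Collected states land in ansible_facts.services; a small sketch (the iscsid lookup is illustrative only, not in the log):

    - name: Gather service states
      ansible.builtin.service_facts:

    - name: Illustrative use of the gathered facts (assumed, not logged)
      ansible.builtin.debug:
        msg: "{{ ansible_facts.services['iscsid.service'].state | default('not installed') }}"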
Nov 25 09:45:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080065a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080065a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080065a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:56 compute-1 ceph-mon[79643]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:56.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080065a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:57 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:57.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:58 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080065a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:58 compute-1 sudo[200242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbloidacxlkqbrgxdgwqxtikesimhatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063957.9081314-102-224036648442677/AnsiballZ_setup.py'
Nov 25 09:45:58 compute-1 sudo[200242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:58 compute-1 python3.9[200244]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:45:58 compute-1 ceph-mon[79643]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:45:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:58.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:45:58 compute-1 sudo[200242]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:58 compute-1 sudo[200326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpmnevrpmfayreeiovjeklihppltktjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063957.9081314-102-224036648442677/AnsiballZ_dnf.py'
Nov 25 09:45:58 compute-1 sudo[200326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:58 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:59 compute-1 python3.9[200328]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
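The dnf invocation at 09:45:59 reduces to a single package/state pair; every other logged parameter is a module default. An equivalent minimal task (task name assumed):

    - name: Install the iSCSI initiator utilities
      ansible.builtin.dnf:
        name: iscsi-initiator-utils
        state: present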
Nov 25 09:45:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:45:59 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:45:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:00 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:00 compute-1 ceph-mon[79643]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:00 compute-1 sudo[200331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:00 compute-1 sudo[200331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:00 compute-1 sudo[200331]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:00.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:00 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080065c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:01 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:01.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:02 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:02 compute-1 ceph-mon[79643]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:46:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:46:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:02 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:03 compute-1 sudo[200326]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:03 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080065e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:03 compute-1 sudo[200507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taljrteqmvlzrcmjcurruxfokigcogkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063963.3557086-138-143451553788945/AnsiballZ_stat.py'
Nov 25 09:46:03 compute-1 sudo[200507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:03 compute-1 python3.9[200509]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:03 compute-1 sudo[200507]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:03.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:04 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:04 compute-1 sudo[200659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnpqunpwnekwlpwkzlrthiyruaeksbza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063964.0524292-168-221622300681309/AnsiballZ_command.py'
Nov 25 09:46:04 compute-1 sudo[200659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:04 compute-1 ceph-mon[79643]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:04 compute-1 python3.9[200661]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:46:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:04.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:04 compute-1 sudo[200659]: pam_unix(sudo:session): session closed for user root
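The command at 09:46:04 runs restorecon in check mode: -n makes no changes, -v reports files whose SELinux label would change, and -r recurses into /etc/iscsi and /var/lib/iscsi. A sketch of the task; registering the result and forcing changed_when are assumptions about how the play consumes the output:

    - name: Dry-run SELinux relabel check for the iSCSI directories
      ansible.builtin.command:
        argv:
          - /usr/sbin/restorecon
          - -nvr
          - /etc/iscsi
          - /var/lib/iscsi
      register: iscsi_restorecon   # assumed variable name
      changed_when: false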
Nov 25 09:46:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:04 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:04 compute-1 sudo[200812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gljmoiiycgjiojwkmyvynvjourzovzjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063964.7750432-198-24764145784439/AnsiballZ_stat.py'
Nov 25 09:46:04 compute-1 sudo[200812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:46:04.993 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:46:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:46:04.993 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:46:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:46:04.993 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:46:05 compute-1 python3.9[200814]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:05 compute-1 sudo[200812]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:05 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:05 compute-1 sudo[200965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irqjjuqfkmgwbxbphpdvhjlikefhqxum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063965.2427788-222-198872630342279/AnsiballZ_command.py'
Nov 25 09:46:05 compute-1 sudo[200965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:05 compute-1 python3.9[200967]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:46:05 compute-1 sudo[200965]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:05 compute-1 sudo[201118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsetgluodymppxakinailyjigdmstaou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063965.740107-246-12682042122660/AnsiballZ_stat.py'
Nov 25 09:46:05 compute-1 sudo[201118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:05.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:06 compute-1 python3.9[201120]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:06 compute-1 sudo[201118]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:06 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:06 compute-1 sudo[201241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckaucrordrggowmnwhjtebshudxaszgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063965.740107-246-12682042122660/AnsiballZ_copy.py'
Nov 25 09:46:06 compute-1 sudo[201241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:06 compute-1 ceph-mon[79643]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:06.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:06 compute-1 python3.9[201243]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063965.740107-246-12682042122660/.source.iscsi _original_basename=.yl7uwzxr follow=False checksum=715bf55f99b2586072debb87217212266103a82d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:06 compute-1 sudo[201241]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:06 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:06 compute-1 sudo[201393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbfgqkeuzugqjdfavbeelcffaqrwjwha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063966.7083373-291-77433809246068/AnsiballZ_file.py'
Nov 25 09:46:06 compute-1 sudo[201393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:07 compute-1 python3.9[201395]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:07 compute-1 sudo[201393]: pam_unix(sudo:session): session closed for user root
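Between 09:46:05 and 09:46:07 the play runs a generate-once pattern: it stats the /etc/iscsi/.initiator_reset marker, mints a fresh IQN with /usr/sbin/iscsi-iname, writes /etc/iscsi/initiatorname.iscsi, and finally touches the marker so later runs skip regeneration. A condensed sketch, assuming the IQN is templated straight into the file (the log shows it staged through a temporary source file instead):

    - name: Generate a new initiator IQN
      ansible.builtin.command: /usr/sbin/iscsi-iname
      register: iscsi_iname        # assumed variable name

    - name: Persist the initiator name
      ansible.builtin.copy:
        content: "InitiatorName={{ iscsi_iname.stdout }}\n"
        dest: /etc/iscsi/initiatorname.iscsi
        mode: "0644"

    - name: Mark the initiator as reset
      ansible.builtin.file:
        path: /etc/iscsi/.initiator_reset
        state: touch
        mode: "0600"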
Nov 25 09:46:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:07 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:07 compute-1 sudo[201546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufccbqwxjxjkdrqqdrtsftdjaiacusii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063967.3492508-315-91759609619322/AnsiballZ_lineinfile.py'
Nov 25 09:46:07 compute-1 sudo[201546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:07 compute-1 podman[201548]: 2025-11-25 09:46:07.726064137 +0000 UTC m=+0.042141398 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 09:46:07 compute-1 python3.9[201549]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:07 compute-1 sudo[201546]: pam_unix(sudo:session): session closed for user root
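The lineinfile call at 09:46:07 pins the CHAP digest list in iscsid.conf: an existing node.session.auth.chap_algs line is replaced in place, otherwise the new line is inserted after the commented-out default. All parameters below are taken from the log; only the task name is assumed:

    - name: Configure the allowed CHAP algorithms
      ansible.builtin.lineinfile:
        path: /etc/iscsi/iscsid.conf
        regexp: '^node.session.auth.chap_algs'
        insertafter: '^#node.session.auth.chap.algs'
        line: node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5
        state: present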
Nov 25 09:46:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:08 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:08 compute-1 ceph-mon[79643]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:08 compute-1 sudo[201716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbsbxgftljpehjmwaoaiyiaflcoxoxvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063968.0543094-342-188929923692401/AnsiballZ_systemd_service.py'
Nov 25 09:46:08 compute-1 sudo[201716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:08 compute-1 python3.9[201718]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:08 compute-1 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 09:46:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:08 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:08 compute-1 sudo[201716]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:09 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:09 compute-1 sudo[201873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfphlflbegxaruxcjjgrpildnrodahrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063969.0783913-366-197919031307897/AnsiballZ_systemd_service.py'
Nov 25 09:46:09 compute-1 sudo[201873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:09 compute-1 python3.9[201875]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:09 compute-1 systemd[1]: Reloading.
Nov 25 09:46:09 compute-1 systemd-rc-local-generator[201897]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:09 compute-1 systemd-sysv-generator[201901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:09 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 09:46:09 compute-1 systemd[1]: Starting Open-iSCSI...
Nov 25 09:46:09 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 09:46:09 compute-1 systemd[1]: Started Open-iSCSI.
Nov 25 09:46:09 compute-1 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 09:46:09 compute-1 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 09:46:09 compute-1 sudo[201873]: pam_unix(sudo:session): session closed for user root
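iscsid is brought up socket-first: iscsid.socket is enabled and started at 09:46:08, then the iscsid service at 09:46:09, which loads the kernel's iSCSI transport class and pulls in the shutdown-logout unit seen above. A sketch that folds the two logged systemd_service calls into one looped task (the loop is a condensation; the log shows two separate invocations):

    - name: Enable and start the iscsid socket and service
      ansible.builtin.systemd_service:
        name: "{{ item }}"
        enabled: true
        state: started
      loop:
        - iscsid.socket
        - iscsid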
Nov 25 09:46:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:09.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:10 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:10 compute-1 ceph-mon[79643]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:10.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:10 compute-1 sudo[202074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjtkjhnitktgfbgmjbvolveptgxukxuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063970.3341656-399-85365268681575/AnsiballZ_service_facts.py'
Nov 25 09:46:10 compute-1 sudo[202074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:10 compute-1 python3.9[202076]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:46:10 compute-1 network[202093]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:46:10 compute-1 network[202094]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:46:10 compute-1 network[202095]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:46:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:10 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:11 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:11.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:12 compute-1 ceph-mon[79643]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:12.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:12 compute-1 sudo[202074]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:12 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:13 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:13.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:14 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:14 compute-1 sudo[202367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tloqiqczwjqfddjlxvxbyqkxheqslado ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063974.069677-429-11859644884640/AnsiballZ_file.py'
Nov 25 09:46:14 compute-1 sudo[202367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:14 compute-1 python3.9[202369]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 09:46:14 compute-1 sudo[202367]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:14 compute-1 ceph-mon[79643]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:46:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:46:14 compute-1 podman[202446]: 2025-11-25 09:46:14.811064546 +0000 UTC m=+0.062126249 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:46:14 compute-1 sudo[202542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwrfxkberhscantrffolshuovsgglilc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063974.5848348-453-191829999341209/AnsiballZ_modprobe.py'
Nov 25 09:46:14 compute-1 sudo[202542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:14 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:15 compute-1 python3.9[202544]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 09:46:15 compute-1 sudo[202542]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:15 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:15 compute-1 sudo[202699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgjxfhehepbqulixkzzvraxtupokdzom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063975.2216332-477-191319779459675/AnsiballZ_stat.py'
Nov 25 09:46:15 compute-1 sudo[202699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:15 compute-1 python3.9[202701]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:15 compute-1 sudo[202699]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:15 compute-1 sudo[202822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gddgfxyvvwokhinlomsgsrauyaptmopv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063975.2216332-477-191319779459675/AnsiballZ_copy.py'
Nov 25 09:46:15 compute-1 sudo[202822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:15.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:15 compute-1 sudo[202825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:46:15 compute-1 sudo[202825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:15 compute-1 sudo[202825]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:15 compute-1 python3.9[202824]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063975.2216332-477-191319779459675/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:15 compute-1 sudo[202822]: pam_unix(sudo:session): session closed for user root
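The stat/copy pair above is the wire pattern Ansible emits for a single controller-side templating task; given the logged _original_basename=module-load.conf.j2, a plausible sketch (task name assumed) is:

    - name: Persist dm-multipath loading across reboots   # assumed task name
      ansible.builtin.template:
        src: module-load.conf.j2
        dest: /etc/modules-load.d/dm-multipath.conf
        mode: '0644'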
Nov 25 09:46:15 compute-1 sudo[202850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:46:15 compute-1 sudo[202850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:16 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:16 compute-1 sudo[202850]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:16 compute-1 sudo[203053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmjtxbmcmdgumrnvgdtpzllqcqkzvzte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063976.2304404-525-57745762621120/AnsiballZ_lineinfile.py'
Nov 25 09:46:16 compute-1 sudo[203053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:16 compute-1 ceph-mon[79643]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:16 compute-1 python3.9[203055]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:16 compute-1 sudo[203053]: pam_unix(sudo:session): session closed for user root
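The /etc/modules entry above reconstructs to (task name assumed; arguments as logged):

    - name: Ensure dm-multipath is listed in /etc/modules   # assumed task name
      ansible.builtin.lineinfile:
        path: /etc/modules
        line: dm-multipath
        create: true
        mode: '0644'
        state: present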
Nov 25 09:46:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:16 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:17 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:17 compute-1 sudo[203206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jimljddqvmgvqracsdjekrsbcnfhbsor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063976.8087318-549-197108302103859/AnsiballZ_systemd.py'
Nov 25 09:46:17 compute-1 sudo[203206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:17 compute-1 python3.9[203208]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:46:17 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 09:46:17 compute-1 systemd[1]: Stopped Load Kernel Modules.
Nov 25 09:46:17 compute-1 systemd[1]: Stopping Load Kernel Modules...
Nov 25 09:46:17 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 25 09:46:17 compute-1 systemd[1]: Finished Load Kernel Modules.
Nov 25 09:46:17 compute-1 sudo[203206]: pam_unix(sudo:session): session closed for user root
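The stop/start pair above is the normal systemd trace of state=restarted; restarting systemd-modules-load.service is what actually loads the freshly configured dm-multipath module. A sketch of the task (name assumed):

    - name: Reload kernel modules so dm-multipath is loaded   # assumed task name
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted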
Nov 25 09:46:17 compute-1 sudo[203362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnzsseqqyzoojmsfvlpcbktrksvhoxkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063977.7390475-573-65743599092436/AnsiballZ_file.py'
Nov 25 09:46:17 compute-1 sudo[203362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:17.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:18 compute-1 python3.9[203364]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:18 compute-1 sudo[203362]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:18.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:18 compute-1 sudo[203514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mburzdtxvnzvwugprtjjyxactlklsnse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063978.364901-600-130834727606145/AnsiballZ_stat.py'
Nov 25 09:46:18 compute-1 sudo[203514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:18 compute-1 ceph-mon[79643]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:46:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:18 compute-1 python3.9[203516]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:18 compute-1 sudo[203514]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:18 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:19 compute-1 sudo[203666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpofjrssbqpbszdviigpyqljqxqqznnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063978.9762342-627-249974591145350/AnsiballZ_stat.py'
Nov 25 09:46:19 compute-1 sudo[203666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:19 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:19 compute-1 python3.9[203669]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:19 compute-1 sudo[203666]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:19 compute-1 sudo[203819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tofvitamgaseudfeqfxjiyjwvpnlhdlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063979.4993434-651-3945009626529/AnsiballZ_stat.py'
Nov 25 09:46:19 compute-1 sudo[203819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:19 compute-1 python3.9[203821]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:19 compute-1 sudo[203819]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:19.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:20 compute-1 sudo[203942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmtrchlskpfjeuwsgwliznztpubffsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063979.4993434-651-3945009626529/AnsiballZ_copy.py'
Nov 25 09:46:20 compute-1 sudo[203942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:20 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:20 compute-1 python3.9[203944]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063979.4993434-651-3945009626529/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:20 compute-1 sudo[203942]: pam_unix(sudo:session): session closed for user root
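Together with the /etc/multipath directory task at 09:46:18, the stat/copy pair above stages the baseline /etc/multipath.conf; the file body itself is not visible here (content=NOT_LOGGING_PARAMETER). A sketch of the originating task (name assumed):

    - name: Install the baseline multipath.conf   # assumed task name
      ansible.builtin.copy:
        src: multipath.conf
        dest: /etc/multipath.conf
        mode: '0644'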
Nov 25 09:46:20 compute-1 sudo[203995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:20 compute-1 sudo[203995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:20 compute-1 sudo[203995]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:20 compute-1 ceph-mon[79643]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:20 compute-1 sudo[204119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbrarvguetpehykonxladxjzjleqmdtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063980.4375174-696-222625941845291/AnsiballZ_command.py'
Nov 25 09:46:20 compute-1 sudo[204119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:20 compute-1 python3.9[204121]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:46:20 compute-1 sudo[204119]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:20 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:21 compute-1 sudo[204272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-civmpvwqaeaookbqbpyrzvzdyypfnjze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063980.9836066-720-228376142164052/AnsiballZ_lineinfile.py'
Nov 25 09:46:21 compute-1 sudo[204272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:21 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84080066e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:21 compute-1 python3.9[204274]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:21 compute-1 sudo[204272]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:21 compute-1 sudo[204276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:46:21 compute-1 sudo[204276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:21 compute-1 sudo[204276]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:21 compute-1 sudo[204450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zetruysbyyeogjvfdmjxpwmqrrhaxxer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063981.5144892-744-215762364966145/AnsiballZ_replace.py'
Nov 25 09:46:21 compute-1 sudo[204450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:21.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:21 compute-1 python3.9[204452]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:21 compute-1 sudo[204450]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:22 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:22 compute-1 ceph-mon[79643]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:22 compute-1 sudo[204602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjvdrkpjshhqwkacsafdaqrubwreredb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063982.1347919-768-40075847491830/AnsiballZ_replace.py'
Nov 25 09:46:22 compute-1 sudo[204602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:22 compute-1 python3.9[204604]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:22 compute-1 sudo[204602]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:22.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:22 compute-1 sudo[204754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-finxwsdirygfuaejtobirpaubqbwzmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063982.6789782-795-210805820128628/AnsiballZ_lineinfile.py'
Nov 25 09:46:22 compute-1 sudo[204754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:22 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:23 compute-1 python3.9[204756]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:23 compute-1 sudo[204754]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:23 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428004dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:23 compute-1 sudo[204907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crrjdiuzvyjctzgdoymvuvpmrlvjbhpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063983.1079934-795-48311303974541/AnsiballZ_lineinfile.py'
Nov 25 09:46:23 compute-1 sudo[204907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:23 compute-1 python3.9[204909]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:23 compute-1 sudo[204907]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:23 compute-1 sudo[205059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogwyevvqjtpbklcgqwtkldzqrdtttumj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063983.5478728-795-88959267173629/AnsiballZ_lineinfile.py'
Nov 25 09:46:23 compute-1 sudo[205059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:23 compute-1 python3.9[205061]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:23 compute-1 sudo[205059]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:23.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:24 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:24 compute-1 sudo[205211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frivlmxlzacezkixwloaplejpbfhpitv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063983.9855826-795-148478275526480/AnsiballZ_lineinfile.py'
Nov 25 09:46:24 compute-1 sudo[205211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:24 compute-1 python3.9[205213]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:24 compute-1 sudo[205211]: pam_unix(sudo:session): session closed for user root
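Net effect of the grep probe at 09:46:20 and the lineinfile/replace sequence through 09:46:24, assuming the shipped file had a defaults section but no blacklist block and none of the four keys: an empty blacklist section (any leading devnode ".*" catch-all is stripped by the replace at 09:46:22) and four settings pushed in directly under the defaults line. Because every lineinfile used insertafter=^defaults with firstmatch, the last task to run lands first, giving roughly:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
            # ... remaining shipped defaults, not visible in the log
    }
    blacklist {
    }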
Nov 25 09:46:24 compute-1 ceph-mon[79643]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:24.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:24 compute-1 sudo[205363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpebcnlywiaymbfhluqhngmjysparpda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063984.487206-882-211682361033577/AnsiballZ_stat.py'
Nov 25 09:46:24 compute-1 sudo[205363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:24 compute-1 python3.9[205365]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:24 compute-1 sudo[205363]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:24 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f841c005f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:25 compute-1 sudo[205517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heplglynoxsloffplagwqoirgasoxpqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063984.9906554-906-152455169202140/AnsiballZ_file.py'
Nov 25 09:46:25 compute-1 sudo[205517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:25 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:25 compute-1 python3.9[205519]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:25 compute-1 sudo[205517]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:25 compute-1 sudo[205670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taumrdtxlbzvsbqooilzkumrdtkixfff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063985.6272523-933-141996344415009/AnsiballZ_file.py'
Nov 25 09:46:25 compute-1 sudo[205670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:25 compute-1 python3.9[205672]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:25 compute-1 sudo[205670]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:26 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:26 compute-1 sudo[205822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cydquwnacvbafogrkfrwxnsglqrlhfyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063986.1394258-957-113630292454216/AnsiballZ_stat.py'
Nov 25 09:46:26 compute-1 sudo[205822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:26 compute-1 ceph-mon[79643]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:26 compute-1 python3.9[205824]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:26 compute-1 sudo[205822]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:26.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:26 compute-1 sudo[205900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqyyhbzgdhglsgqtuglkwspxsemkinlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063986.1394258-957-113630292454216/AnsiballZ_file.py'
Nov 25 09:46:26 compute-1 sudo[205900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:26 compute-1 python3.9[205902]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:26 compute-1 sudo[205900]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:26 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:27 compute-1 sudo[206052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfahrbxpjjprutvcqemwudiepvjhxhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063986.9226918-957-254227409701939/AnsiballZ_stat.py'
Nov 25 09:46:27 compute-1 sudo[206052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:27 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8428005ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:27 compute-1 python3.9[206054]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:27 compute-1 sudo[206052]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:27 compute-1 sudo[206132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdhictwbfmfogvkqjekzsehtcortsazo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063986.9226918-957-254227409701939/AnsiballZ_file.py'
Nov 25 09:46:27 compute-1 sudo[206132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:27 compute-1 python3.9[206134]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:27 compute-1 sudo[206132]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:27.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:28 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:28 compute-1 sudo[206284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myxladxhmgcyknbdaevmazrxwluccnxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063987.9719183-1026-61075378723924/AnsiballZ_file.py'
Nov 25 09:46:28 compute-1 sudo[206284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:28 compute-1 python3.9[206286]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:28 compute-1 sudo[206284]: pam_unix(sudo:session): session closed for user root
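One quirk worth flagging above: mode=420 is not an odd permission choice; an unquoted 0644 in playbook YAML parses as an octal integer and Ansible logs it in decimal (0o644 == 420), so the directory still ends up 0644. A sketch with the conventional quoting:

    - name: Ensure the systemd preset directory exists   # assumed task name
      ansible.builtin.file:
        path: /etc/systemd/system-preset
        state: directory
        mode: '0644'   # quoted; the unquoted form is what produced the logged mode=420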
Nov 25 09:46:28 compute-1 ceph-mon[79643]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:28.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:28 compute-1 sudo[206436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbgnnmasryenrjflaculwcrniynpbsyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063988.5407307-1052-4820581950777/AnsiballZ_stat.py'
Nov 25 09:46:28 compute-1 sudo[206436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:28 compute-1 python3.9[206438]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:28 compute-1 sudo[206436]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:28 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:29 compute-1 sudo[206515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzrkvmmcezaanznfdfnzenobaaaoxnyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063988.5407307-1052-4820581950777/AnsiballZ_file.py'
Nov 25 09:46:29 compute-1 sudo[206515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:29 compute-1 python3.9[206517]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:29 compute-1 sudo[206515]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:29 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:29 compute-1 sudo[206668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhksttodokaseqkpzdbtcpsddamehpfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063989.365737-1086-11452366432938/AnsiballZ_stat.py'
Nov 25 09:46:29 compute-1 sudo[206668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:29 compute-1 python3.9[206670]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:29 compute-1 sudo[206668]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:29 compute-1 sudo[206746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwyschifclumjvmgorvtezyegrofyckf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063989.365737-1086-11452366432938/AnsiballZ_file.py'
Nov 25 09:46:29 compute-1 sudo[206746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:29.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:30 compute-1 python3.9[206748]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:30 compute-1 sudo[206746]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:30 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:30 compute-1 ceph-mon[79643]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:30.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:30 compute-1 sudo[206898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwpqeuocxpifpzdtklzmujcqsgxtnsfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063990.3620899-1122-129001162504021/AnsiballZ_systemd.py'
Nov 25 09:46:30 compute-1 sudo[206898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:30 compute-1 python3.9[206900]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:30 compute-1 systemd[1]: Reloading.
Nov 25 09:46:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:30 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:30 compute-1 systemd-rc-local-generator[206924]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:30 compute-1 systemd-sysv-generator[206928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:31 compute-1 sudo[206898]: pam_unix(sudo:session): session closed for user root
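The enable-and-start above explains the surrounding systemd noise: daemon_reload=True triggers the "Reloading." line and re-runs the generators (hence the rc.local and SysV network warnings). The same pattern repeats for netns-placeholder at 09:46:33. A sketch (task name assumed):

    - name: Enable and start edpm-container-shutdown   # assumed task name
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        daemon_reload: true
        enabled: true
        state: started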
Nov 25 09:46:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:31 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:31 compute-1 sudo[207088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aimafywzrqeylauxlbwhowjmoxdktxcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063991.3192272-1146-22534868064395/AnsiballZ_stat.py'
Nov 25 09:46:31 compute-1 sudo[207088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:31 compute-1 python3.9[207090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:31 compute-1 sudo[207088]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:31 compute-1 sudo[207166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouyumqjzoqgyqiyvnfzzdxywphlmpwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063991.3192272-1146-22534868064395/AnsiballZ_file.py'
Nov 25 09:46:31 compute-1 sudo[207166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:31.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:32 compute-1 python3.9[207168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:32 compute-1 sudo[207166]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:32 compute-1 ceph-mon[79643]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:32 compute-1 sudo[207318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnqvtbxqlmbmbeeotcroytpcmwfqjdfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063992.2750957-1182-114448733999092/AnsiballZ_stat.py'
Nov 25 09:46:32 compute-1 sudo[207318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:32.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:32 compute-1 python3.9[207320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:32 compute-1 sudo[207318]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:32 compute-1 sudo[207396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqguwkmcyazdyqwapaqgvkmfrohoyrbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063992.2750957-1182-114448733999092/AnsiballZ_file.py'
Nov 25 09:46:32 compute-1 sudo[207396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:32 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:32 compute-1 python3.9[207398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:32 compute-1 sudo[207396]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:33 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:33 compute-1 sudo[207549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsnshxdpvvzxrjrslkdbicmkhqzwulsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063993.3738108-1218-141469341491443/AnsiballZ_systemd.py'
Nov 25 09:46:33 compute-1 sudo[207549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:33 compute-1 python3.9[207551]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:33 compute-1 systemd[1]: Reloading.
Nov 25 09:46:33 compute-1 systemd-rc-local-generator[207572]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:33 compute-1 systemd-sysv-generator[207578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:33.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:34 compute-1 systemd[1]: Starting Create netns directory...
Nov 25 09:46:34 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:46:34 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:46:34 compute-1 systemd[1]: Finished Create netns directory.
Nov 25 09:46:34 compute-1 sudo[207549]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:34 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:34 compute-1 ceph-mon[79643]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:34.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:34 compute-1 sudo[207742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvefmvqglyvdwoymfrsurqyclbsifoxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063994.5451894-1248-128235247250470/AnsiballZ_file.py'
Nov 25 09:46:34 compute-1 sudo[207742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:34 compute-1 python3.9[207744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:34 compute-1 sudo[207742]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:34 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:35 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:35 compute-1 sudo[207895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnxlsdysuwnpnupehyxiovohdmibdugr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063995.1088336-1272-182544758090708/AnsiballZ_stat.py'
Nov 25 09:46:35 compute-1 sudo[207895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:35 compute-1 python3.9[207897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:35 compute-1 sudo[207895]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:35 compute-1 sudo[208018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycjnyxokbojftdbrrntqlwpyyfnnxuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063995.1088336-1272-182544758090708/AnsiballZ_copy.py'
Nov 25 09:46:35 compute-1 sudo[208018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:35 compute-1 python3.9[208020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063995.1088336-1272-182544758090708/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:35 compute-1 sudo[208018]: pam_unix(sudo:session): session closed for user root
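Sketch of the healthcheck staging task reconstructed from the copy arguments above (task name assumed; the script body is not logged):

    - name: Install the multipathd container healthcheck   # assumed task name
      ansible.builtin.copy:
        src: healthcheck
        dest: /var/lib/openstack/healthchecks/multipathd/
        owner: zuul
        group: zuul
        mode: '0700'
        setype: container_file_t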
Nov 25 09:46:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:35.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:36 compute-1 ceph-mon[79643]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:36.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:36 compute-1 sudo[208170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twqxswrlppmqfuwztedilcbbsxbanhrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063996.4553916-1323-254404293663431/AnsiballZ_file.py'
Nov 25 09:46:36 compute-1 sudo[208170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:36 compute-1 python3.9[208172]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:36 compute-1 sudo[208170]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:36 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:37 compute-1 sudo[208322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-motfqbigrzbjzombvjqrimsxvwfjnsuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063997.005849-1347-38222224122167/AnsiballZ_stat.py'
Nov 25 09:46:37 compute-1 sudo[208322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:37 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:37 compute-1 python3.9[208324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:37 compute-1 sudo[208322]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:37 compute-1 sudo[208446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnjkkzovxubpzsavdvdbgsawcyurjbmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063997.005849-1347-38222224122167/AnsiballZ_copy.py'
Nov 25 09:46:37 compute-1 sudo[208446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:37 compute-1 python3.9[208448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063997.005849-1347-38222224122167/.source.json _original_basename=.qfciec9u follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:37 compute-1 sudo[208446]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:38 compute-1 sudo[208607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iitubkklnarjtdwjpygpauncdkvciwvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063998.0029433-1392-211511068354485/AnsiballZ_file.py'
Nov 25 09:46:38 compute-1 sudo[208607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:38 compute-1 podman[208572]: 2025-11-25 09:46:38.229002908 +0000 UTC m=+0.049962113 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
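
podman emits one of these journald health_status events each time the configured healthcheck runs ('test': '/openstack/healthcheck', mounted read-only from /var/lib/openstack/healthchecks/ovn_metadata_agent); health_failing_streak=0 means the probe has not failed recently. The same state can be read back on demand with podman inspect, sketched here with minimal error handling:

    import json
    import subprocess

    def container_health(name: str) -> str:
        """Return the state these events report as health_status=...,
        or 'none' when the container has no healthcheck configured."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "json", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0].get("State", {})
        return (state.get("Health") or {}).get("Status", "none")

    # container_health("ovn_metadata_agent") reports "healthy" on this host
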
Nov 25 09:46:38 compute-1 python3.9[208614]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:38 compute-1 sudo[208607]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:38 compute-1 ceph-mon[79643]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:38 compute-1 sudo[208766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbhvmjviaedjowqgsylrfgspmkgnesw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063998.5665362-1416-142765266382646/AnsiballZ_stat.py'
Nov 25 09:46:38 compute-1 sudo[208766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:38 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:38 compute-1 sudo[208766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:39 compute-1 sudo[208889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzsxbbwwvzflykmbfsjeeoxutingumpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063998.5665362-1416-142765266382646/AnsiballZ_copy.py'
Nov 25 09:46:39 compute-1 sudo[208889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:39 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:39 compute-1 sudo[208889]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:39 compute-1 sudo[209042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmjkeqpjuwgnjgstvqdwebxyjmfjulqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063999.6952684-1467-123795974205800/AnsiballZ_container_config_data.py'
Nov 25 09:46:39 compute-1 sudo[209042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:40 compute-1 python3.9[209044]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 09:46:40 compute-1 sudo[209042]: pam_unix(sudo:session): session closed for user root
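
The container_config_data module gathers every file matching *.json under /var/lib/edpm-config/container-startup-config/multipathd and applies the (here empty) config_overrides on top. A rough stand-in for the collection step; the real module's merge semantics are not shown in this log:

    import json
    from pathlib import Path

    def load_container_configs(config_dir: str, pattern: str = "*.json") -> dict:
        """Collect per-container JSON definitions, keyed by file stem,
        mirroring the module's config_path/config_pattern arguments."""
        return {
            f.stem: json.loads(f.read_text())
            for f in sorted(Path(config_dir).glob(pattern))
        }

    # load_container_configs("/var/lib/edpm-config/container-startup-config/multipathd")
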
Nov 25 09:46:40 compute-1 ceph-mon[79643]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:40 compute-1 sudo[209121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:40 compute-1 sudo[209121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:40 compute-1 sudo[209121]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:40 compute-1 sudo[209219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjrvsbabjgzdwjhjoieakqauslzzeagi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064000.398637-1494-24422542314473/AnsiballZ_container_config_hash.py'
Nov 25 09:46:40 compute-1 sudo[209219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:40 compute-1 python3.9[209221]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:46:40 compute-1 sudo[209219]: pam_unix(sudo:session): session closed for user root
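
container_config_hash digests the config volumes under /var/lib/config-data; the result surfaces as environment hashes such as the EDPM_CONFIG_HASH value on the ovn_metadata_agent container above, so a changed config forces a redeploy. The module's exact hashing scheme is not visible here; an illustrative drift detector over a config tree looks like this:

    import hashlib
    from pathlib import Path

    def config_tree_hash(root: str) -> str:
        """A stable digest over relative paths and file contents; a stand-in
        for whatever scheme the real module uses."""
        h = hashlib.sha256()
        for f in sorted(Path(root).rglob("*")):
            if f.is_file():
                h.update(str(f.relative_to(root)).encode())
                h.update(f.read_bytes())
        return h.hexdigest()

    # config_tree_hash("/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent")
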
Nov 25 09:46:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:40 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:41 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:41 compute-1 sudo[209372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqlkhlbxmunvvrxxshsftkwpthbovytz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064001.146134-1521-56044981387286/AnsiballZ_podman_container_info.py'
Nov 25 09:46:41 compute-1 sudo[209372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:41 compute-1 python3.9[209374]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 09:46:41 compute-1 sudo[209372]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8408006740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094642 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
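
HAProxy marks nfs.cephfs.2 DOWN after a failed Layer4 check, i.e. a plain TCP connect that was refused; with 2 active servers left, the frontend keeps serving. The check itself is nothing more than the following (host and port below are placeholders, not taken from the log):

    import socket

    def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        """Layer4 health check in miniature: success means the TCP connect
        completed; 'Connection refused' is the failure logged above."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # l4_check("192.0.2.10", 2049)  # hypothetical NFS backend endpoint
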
Nov 25 09:46:42 compute-1 ceph-mon[79643]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:42.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:42 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83e40025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:43 compute-1 sudo[209543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-limhmfnbtezbquacovhsulhxmjthimho ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764064002.8677807-1560-13282168697703/AnsiballZ_edpm_container_manage.py'
Nov 25 09:46:43 compute-1 sudo[209543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:43 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:43 compute-1 python3[209546]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:46:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:44 compute-1 ceph-mon[79643]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:44.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:44 compute-1 podman[209557]: 2025-11-25 09:46:44.59702934 +0000 UTC m=+1.152352912 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 09:46:44 compute-1 podman[209606]: 2025-11-25 09:46:44.691278677 +0000 UTC m=+0.030532914 container create 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:46:44 compute-1 podman[209606]: 2025-11-25 09:46:44.677311203 +0000 UTC m=+0.016565460 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 09:46:44 compute-1 python3[209546]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
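
edpm_container_manage logs the exact podman create invocation it derives from the JSON definition; note that it only creates the container here, and the actual start is delegated to the edpm_multipathd.service unit installed a few seconds later. A simplified reconstruction of how that argv falls out of the config_data dict (labels and the journald log driver are omitted; this mirrors the logged command, not the module's source):

    def podman_create_argv(name: str, cfg: dict) -> list:
        """Rebuild the shape of the PODMAN-CONTAINER-DEBUG command above."""
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", "/run/%s.pid" % name]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", "%s=%s" % (key, val)]
        if "healthcheck" in cfg:
            argv += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if "net" in cfg:
            argv += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            argv.append("--privileged=True")
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        return argv + [cfg["image"]]
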
Nov 25 09:46:44 compute-1 sudo[209543]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:44 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:45 compute-1 sudo[209792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbiobbsqcdbvgsbuqnfbaxzdzjnllnmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064004.9290128-1584-259444023065739/AnsiballZ_stat.py'
Nov 25 09:46:45 compute-1 sudo[209792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:45 compute-1 podman[209757]: 2025-11-25 09:46:45.15799094 +0000 UTC m=+0.062621260 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 09:46:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:45 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84340a7320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:45 compute-1 python3.9[209801]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:45 compute-1 sudo[209792]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:45 compute-1 sudo[209961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcyjtqjukaerxagadcgykyqzuzivxyap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064005.6050398-1611-9421546637836/AnsiballZ_file.py'
Nov 25 09:46:45 compute-1 sudo[209961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:45 compute-1 python3.9[209963]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:45 compute-1 sudo[209961]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:45.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:46 compute-1 sudo[210037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgypxgqebuuygfetpkmtoyrrdgtdjttu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064005.6050398-1611-9421546637836/AnsiballZ_stat.py'
Nov 25 09:46:46 compute-1 sudo[210037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:46 compute-1 python3.9[210039]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:46 compute-1 sudo[210037]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:46 compute-1 ceph-mon[79643]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:46.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:46 compute-1 sudo[210188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibqgiemzsacltwzjuezpwtvvcfwmjcwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064006.2914648-1611-204302627903118/AnsiballZ_copy.py'
Nov 25 09:46:46 compute-1 sudo[210188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:46 compute-1 python3.9[210190]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764064006.2914648-1611-204302627903118/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:46 compute-1 sudo[210188]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:46 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:46 compute-1 sudo[210264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbqcxivmtcsqkpywezpybaywueqrcybe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064006.2914648-1611-204302627903118/AnsiballZ_systemd.py'
Nov 25 09:46:46 compute-1 sudo[210264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:47 compute-1 python3.9[210266]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:46:47 compute-1 systemd[1]: Reloading.
Nov 25 09:46:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:47 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:47 compute-1 systemd-rc-local-generator[210288]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:47 compute-1 systemd-sysv-generator[210292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:47 compute-1 sudo[210264]: pam_unix(sudo:session): session closed for user root
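
ansible-systemd with daemon_reload=True is essentially a systemctl daemon-reload; systemd re-runs its generators on every reload, which is why the rc.local and SysV network notices repeat each time. The three calls made in this block reduce to the following (a sketch; the module also inspects unit state):

    import subprocess

    def systemctl(*args: str) -> None:
        """Thin wrapper over the systemctl operations performed here."""
        subprocess.run(["systemctl", *args], check=True)

    # systemctl("daemon-reload")                       # daemon_reload=True
    # systemctl("enable", "edpm_multipathd.service")   # enabled=True
    # systemctl("restart", "edpm_multipathd.service")  # state=restarted
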
Nov 25 09:46:47 compute-1 sudo[210377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbgdcgieeiypkewnqlygurhhgjyjwdqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064006.2914648-1611-204302627903118/AnsiballZ_systemd.py'
Nov 25 09:46:47 compute-1 sudo[210377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:47 compute-1 python3.9[210379]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:47 compute-1 systemd[1]: Reloading.
Nov 25 09:46:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:47.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:47 compute-1 systemd-sysv-generator[210409]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:48 compute-1 systemd-rc-local-generator[210406]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84340a7320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:48 compute-1 systemd[1]: Starting multipathd container...
Nov 25 09:46:48 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:46:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8466d66b5b6736978471b6cb19eb7acfb09bee9b744b9a1976ad32683452882a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8466d66b5b6736978471b6cb19eb7acfb09bee9b744b9a1976ad32683452882a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:48 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc.
Nov 25 09:46:48 compute-1 podman[210418]: 2025-11-25 09:46:48.257815749 +0000 UTC m=+0.074633609 container init 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:46:48 compute-1 multipathd[210430]: + sudo -E kolla_set_configs
Nov 25 09:46:48 compute-1 sudo[210436]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 09:46:48 compute-1 sudo[210436]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:46:48 compute-1 sudo[210436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 09:46:48 compute-1 podman[210418]: 2025-11-25 09:46:48.285650117 +0000 UTC m=+0.102467967 container start 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:46:48 compute-1 podman[210418]: multipathd
Nov 25 09:46:48 compute-1 systemd[1]: Started multipathd container.
Nov 25 09:46:48 compute-1 sudo[210377]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:48 compute-1 multipathd[210430]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:46:48 compute-1 multipathd[210430]: INFO:__main__:Validating config file
Nov 25 09:46:48 compute-1 multipathd[210430]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:46:48 compute-1 multipathd[210430]: INFO:__main__:Writing out command to execute
Nov 25 09:46:48 compute-1 sudo[210436]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:48 compute-1 multipathd[210430]: ++ cat /run_command
Nov 25 09:46:48 compute-1 multipathd[210430]: + CMD='/usr/sbin/multipathd -d'
Nov 25 09:46:48 compute-1 multipathd[210430]: + ARGS=
Nov 25 09:46:48 compute-1 multipathd[210430]: + sudo kolla_copy_cacerts
Nov 25 09:46:48 compute-1 sudo[210454]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 09:46:48 compute-1 sudo[210454]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:46:48 compute-1 sudo[210454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 09:46:48 compute-1 podman[210437]: 2025-11-25 09:46:48.341631873 +0000 UTC m=+0.047858328 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 09:46:48 compute-1 sudo[210454]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:48 compute-1 multipathd[210430]: + [[ ! -n '' ]]
Nov 25 09:46:48 compute-1 multipathd[210430]: + . kolla_extend_start
Nov 25 09:46:48 compute-1 multipathd[210430]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 09:46:48 compute-1 multipathd[210430]: Running command: '/usr/sbin/multipathd -d'
Nov 25 09:46:48 compute-1 multipathd[210430]: + umask 0022
Nov 25 09:46:48 compute-1 multipathd[210430]: + exec /usr/sbin/multipathd -d
Nov 25 09:46:48 compute-1 systemd[1]: 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc-1d3453debefb6f0.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 09:46:48 compute-1 systemd[1]: 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc-1d3453debefb6f0.service: Failed with result 'exit-code'.
Nov 25 09:46:48 compute-1 multipathd[210430]: 2792.945378 | --------start up--------
Nov 25 09:46:48 compute-1 multipathd[210430]: 2792.945391 | read /etc/multipath.conf
Nov 25 09:46:48 compute-1 multipathd[210430]: 2792.949077 | path checkers start up
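
The shell trace above is kolla's COPY_ALWAYS startup path: kolla_set_configs reads /var/lib/kolla/config_files/config.json, copies the declared files into place, writes the command to /run_command, and the wrapper finally execs '/usr/sbin/multipathd -d'. Reduced to a skeleton; the config_files/command keys follow the documented kolla config.json shape, and the real scripts do far more validation and permission handling:

    import json
    import os
    import shutil

    def kolla_start(config_json: str = "/var/lib/kolla/config_files/config.json") -> None:
        """Skeleton of the COPY_ALWAYS flow traced above."""
        with open(config_json) as fh:
            cfg = json.load(fh)
        for item in cfg.get("config_files", []):      # kolla_set_configs step
            shutil.copy(item["source"], item["dest"])
        # kolla writes cfg["command"] to /run_command, then execs it:
        os.execvp("/bin/sh", ["/bin/sh", "-c", cfg["command"]])

The transient healthcheck unit failing with status=1 just above appears to be the first probe racing the daemon's startup; podman accordingly reported health_status=starting with health_failing_streak=1.
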
Nov 25 09:46:48 compute-1 ceph-mon[79643]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:46:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:48.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:46:48 compute-1 python3.9[210617]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:48 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:49 compute-1 sudo[210770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qljdzlevelyijcmliwbkalbnojzqwefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064009.0721407-1719-137921667366450/AnsiballZ_command.py'
Nov 25 09:46:49 compute-1 sudo[210770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:49 compute-1 python3.9[210772]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:46:49 compute-1 sudo[210770]: pam_unix(sudo:session): session closed for user root
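
podman ps --filter volume=/etc/multipath.conf lists every container that mounts the multipath configuration; together with the stat of /etc/multipath/.multipath_restart_required above, it tells the role which containers need a restart after a config change. The same query in Python:

    import subprocess

    def containers_mounting(path: str) -> list:
        """Names of running containers that mount the given volume path,
        as in the podman ps --filter volume=... task above."""
        out = subprocess.run(
            ["podman", "ps", "--filter", "volume=%s" % path,
             "--format", "{{.Names}}"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [name for name in out.splitlines() if name]

    # containers_mounting("/etc/multipath.conf") yields ["multipathd"] here
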
Nov 25 09:46:49 compute-1 sudo[210931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsjsgassyhggvmyzlthxguhjhcecdlfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064009.6217349-1743-147759742042460/AnsiballZ_systemd.py'
Nov 25 09:46:49 compute-1 sudo[210931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:49 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:46:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:49.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:50 compute-1 python3.9[210933]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:46:50 compute-1 systemd[1]: Stopping multipathd container...
Nov 25 09:46:50 compute-1 multipathd[210430]: 2794.717705 | exit (signal)
Nov 25 09:46:50 compute-1 multipathd[210430]: 2794.717744 | --------shut down-------
Nov 25 09:46:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:50 compute-1 systemd[1]: libpod-088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc.scope: Deactivated successfully.
Nov 25 09:46:50 compute-1 conmon[210430]: conmon 088298a790312dd86f7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc.scope/container/memory.events
Nov 25 09:46:50 compute-1 podman[210937]: 2025-11-25 09:46:50.159427916 +0000 UTC m=+0.054896782 container died 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:46:50 compute-1 systemd[1]: 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc-1d3453debefb6f0.timer: Deactivated successfully.
Nov 25 09:46:50 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc.
Nov 25 09:46:50 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc-userdata-shm.mount: Deactivated successfully.
Nov 25 09:46:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-8466d66b5b6736978471b6cb19eb7acfb09bee9b744b9a1976ad32683452882a-merged.mount: Deactivated successfully.
Nov 25 09:46:50 compute-1 podman[210937]: 2025-11-25 09:46:50.291461375 +0000 UTC m=+0.186930240 container cleanup 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:46:50 compute-1 podman[210937]: multipathd
Nov 25 09:46:50 compute-1 podman[210960]: multipathd
Nov 25 09:46:50 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 09:46:50 compute-1 systemd[1]: Stopped multipathd container.
Nov 25 09:46:50 compute-1 systemd[1]: Starting multipathd container...
Nov 25 09:46:50 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:46:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8466d66b5b6736978471b6cb19eb7acfb09bee9b744b9a1976ad32683452882a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8466d66b5b6736978471b6cb19eb7acfb09bee9b744b9a1976ad32683452882a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:50 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc.
Nov 25 09:46:50 compute-1 podman[210969]: 2025-11-25 09:46:50.429703774 +0000 UTC m=+0.078199020 container init 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 09:46:50 compute-1 multipathd[210981]: + sudo -E kolla_set_configs
Nov 25 09:46:50 compute-1 sudo[210988]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 09:46:50 compute-1 podman[210969]: 2025-11-25 09:46:50.448733118 +0000 UTC m=+0.097228343 container start 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:46:50 compute-1 sudo[210988]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:46:50 compute-1 sudo[210988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 09:46:50 compute-1 podman[210969]: multipathd
Nov 25 09:46:50 compute-1 ceph-mon[79643]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:50 compute-1 systemd[1]: Started multipathd container.
Nov 25 09:46:50 compute-1 sudo[210931]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:50 compute-1 multipathd[210981]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:46:50 compute-1 multipathd[210981]: INFO:__main__:Validating config file
Nov 25 09:46:50 compute-1 multipathd[210981]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:46:50 compute-1 multipathd[210981]: INFO:__main__:Writing out command to execute
Nov 25 09:46:50 compute-1 sudo[210988]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:50 compute-1 multipathd[210981]: ++ cat /run_command
Nov 25 09:46:50 compute-1 multipathd[210981]: + CMD='/usr/sbin/multipathd -d'
Nov 25 09:46:50 compute-1 multipathd[210981]: + ARGS=
Nov 25 09:46:50 compute-1 multipathd[210981]: + sudo kolla_copy_cacerts
Nov 25 09:46:50 compute-1 sudo[211010]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 09:46:50 compute-1 sudo[211010]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:46:50 compute-1 sudo[211010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 09:46:50 compute-1 sudo[211010]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:50 compute-1 multipathd[210981]: + [[ ! -n '' ]]
Nov 25 09:46:50 compute-1 multipathd[210981]: + . kolla_extend_start
Nov 25 09:46:50 compute-1 multipathd[210981]: Running command: '/usr/sbin/multipathd -d'
Nov 25 09:46:50 compute-1 multipathd[210981]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 09:46:50 compute-1 multipathd[210981]: + umask 0022
Nov 25 09:46:50 compute-1 multipathd[210981]: + exec /usr/sbin/multipathd -d
Nov 25 09:46:50 compute-1 multipathd[210981]: 2795.111548 | --------start up--------
Nov 25 09:46:50 compute-1 multipathd[210981]: 2795.111560 | read /etc/multipath.conf
Nov 25 09:46:50 compute-1 multipathd[210981]: 2795.115099 | path checkers start up
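The '+'-prefixed lines above are the bash xtrace of the generic kolla entrypoint inside the multipathd container: render config with kolla_set_configs, sync CA trust, read the daemon command from /run_command, source the per-image hook, then exec the daemon so it becomes the container's main process. A minimal sketch of that flow, reconstructed only from the trace (not the actual kolla_start source; error handling omitted):

    sudo -E kolla_set_configs                      # copy the sources listed in config.json (COPY_ALWAYS)
    CMD=$(cat /run_command)                        # here: '/usr/sbin/multipathd -d'
    ARGS=
    sudo kolla_copy_cacerts                        # refresh the in-container CA bundle
    [[ ! -n "${ARGS}" ]] && . kolla_extend_start   # optional per-image startup hook
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}                            # daemon replaces the shell as the container's PID 1

Running multipathd with -d keeps it in the foreground, which is what lets podman supervise it directly.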
Nov 25 09:46:50 compute-1 podman[210989]: 2025-11-25 09:46:50.529435846 +0000 UTC m=+0.069725109 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:46:50 compute-1 systemd[1]: 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc-3db5a10de23b15b8.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 09:46:50 compute-1 systemd[1]: 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc-3db5a10de23b15b8.service: Failed with result 'exit-code'.
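The failing 088298a79031...-3db5a10de23b15b8.service unit is the transient service podman spawns to run the container healthcheck; an exit status of 1 within the first second just means the check fired before multipathd finished starting, and the health_status=starting / health_failing_streak=1 fields above show the streak is not yet fatal. To re-run and read the health state by hand (the inspect field is .State.Healthcheck on older podman releases, so the format path below is a best guess):

    podman healthcheck run multipathd && echo healthy
    podman inspect multipathd --format '{{json .State.Health}}'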
Nov 25 09:46:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:50.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
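These beast access lines recur about every two seconds from 192.168.122.100 and .102: an anonymous "HEAD / HTTP/1.0" answered with 200 and no body is the classic load-balancer liveness probe against radosgw, not client traffic. One probe can be reproduced by hand (the RGW listen host and port are not shown in this excerpt, so both are assumptions):

    curl -sI http://compute-1:8080/ | head -n1     # expect an HTTP 200 status line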
Nov 25 09:46:50 compute-1 sudo[211168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gypfcnqljucxkhzpmhtoimwmieboufxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064010.6661885-1767-100448800871443/AnsiballZ_file.py'
Nov 25 09:46:50 compute-1 sudo[211168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:50 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84340a7320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:50 compute-1 python3.9[211170]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:51 compute-1 sudo[211168]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:51 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:51 compute-1 sudo[211321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgpnsfkjoekklzrvnunihlftkkaijkdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064011.5679045-1803-156520705528067/AnsiballZ_file.py'
Nov 25 09:46:51 compute-1 sudo[211321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:51 compute-1 python3.9[211323]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 09:46:51 compute-1 sudo[211321]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:51.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83f0001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:52 compute-1 sudo[211473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pabkbkuyzuncfylqtlehvdsedlfwrkpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064012.1109536-1827-16680898173132/AnsiballZ_modprobe.py'
Nov 25 09:46:52 compute-1 sudo[211473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:52 compute-1 python3.9[211475]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 09:46:52 compute-1 kernel: Key type psk registered
Nov 25 09:46:52 compute-1 ceph-mon[79643]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:52 compute-1 sudo[211473]: pam_unix(sudo:session): session closed for user root
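The community.general.modprobe task loads nvme-fabrics with persistent=disabled, so nothing is written to disk at this step; the kernel's "Key type psk registered" line right after appears to be a dependency of that module load (the NVMe/TCP TLS-PSK keyring support) registering itself. Equivalent by hand:

    sudo modprobe nvme-fabrics
    lsmod | grep '^nvme_fabrics'                   # resident now, but gone after reboot until persisted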
Nov 25 09:46:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:52.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:52 compute-1 sudo[211638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqqrkoudjpocumuwzdnabqoheqhgzqpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064012.6660237-1851-134990587657810/AnsiballZ_stat.py'
Nov 25 09:46:52 compute-1 sudo[211638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:46:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:46:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:52 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:46:53 compute-1 python3.9[211640]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:53 compute-1 sudo[211638]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:53 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84340a7320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:53 compute-1 sudo[211762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqocqomdgcnlqumzajwelgrotojucdss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064012.6660237-1851-134990587657810/AnsiballZ_copy.py'
Nov 25 09:46:53 compute-1 sudo[211762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:53 compute-1 python3.9[211764]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764064012.6660237-1851-134990587657810/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:53 compute-1 sudo[211762]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:53 compute-1 sudo[211914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fihsekqasheljqborwyaivqjqbnxjycg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064013.7214198-1899-251445600778765/AnsiballZ_lineinfile.py'
Nov 25 09:46:53 compute-1 sudo[211914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:53.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:54 compute-1 python3.9[211916]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:54 compute-1 sudo[211914]: pam_unix(sudo:session): session closed for user root
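Persistence is handled as a separate pair of tasks: a copy task writes /etc/modules-load.d/nvme-fabrics.conf (mode 0644, rendered from module-load.conf.j2) and a lineinfile task ensures an nvme-fabrics line in /etc/modules. The net effect, sketched by hand (the file contents are inferred from the task parameters, not shown verbatim in the log):

    echo nvme-fabrics | sudo tee /etc/modules-load.d/nvme-fabrics.conf
    grep -qxF nvme-fabrics /etc/modules || echo nvme-fabrics | sudo tee -a /etc/modules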
Nov 25 09:46:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:54 compute-1 sudo[212066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drtqfpazrxcmloxatrfqxyxwmixxlezx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064014.2765388-1923-247243279124448/AnsiballZ_systemd.py'
Nov 25 09:46:54 compute-1 sudo[212066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:54 compute-1 ceph-mon[79643]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:46:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:54.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:46:54 compute-1 python3.9[212068]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:46:54 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 09:46:54 compute-1 systemd[1]: Stopped Load Kernel Modules.
Nov 25 09:46:54 compute-1 systemd[1]: Stopping Load Kernel Modules...
Nov 25 09:46:54 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 25 09:46:54 compute-1 systemd[1]: Finished Load Kernel Modules.
Nov 25 09:46:54 compute-1 sudo[212066]: pam_unix(sudo:session): session closed for user root
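Restarting systemd-modules-load.service forces systemd to re-read modules-load.d, which is how the play proves the new nvme-fabrics.conf actually loads; the Stopped/Starting/Finished sequence above is exactly that restart. By hand:

    sudo systemctl restart systemd-modules-load.service
    systemctl is-active systemd-modules-load.service   # prints 'active' (oneshot with RemainAfterExit)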
Nov 25 09:46:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:54 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:55 compute-1 sudo[212223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wszdhybtpdwqqujzkrkpulyjozdvnsel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064015.0243988-1947-5602696311739/AnsiballZ_dnf.py'
Nov 25 09:46:55 compute-1 sudo[212223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:55 compute-1 python3.9[212225]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
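The dnf task installs nvme-cli to pair with the nvme-fabrics module loaded a moment earlier; note its sudo session stays open until 09:46:58 because the transaction (and the man-db cache update it triggers) takes a few seconds. Hand-run equivalent plus a sanity check (nvme-cli ships the nvme binary):

    sudo dnf install -y nvme-cli
    nvme version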
Nov 25 09:46:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:55 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:46:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:55.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:56 compute-1 ceph-mon[79643]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:56.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[136915]: 25/11/2025 09:46:56 : epoch 692579c9 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f83ec0043b0 fd 38 proxy ignored for local
Nov 25 09:46:56 compute-1 kernel: ganesha.nfsd[200116]: segfault at 50 ip 00007f849c47832e sp 00007f84617f9210 error 4 in libntirpc.so.5.8[7f849c45d000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 09:46:56 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:46:56 compute-1 systemd[1]: Started Process Core Dump (PID 212230/UID 0).
Nov 25 09:46:57 compute-1 systemd[1]: Reloading.
Nov 25 09:46:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:57 compute-1 systemd-rc-local-generator[212251]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:57 compute-1 systemd-sysv-generator[212257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:57 compute-1 systemd[1]: Reloading.
Nov 25 09:46:57 compute-1 systemd-rc-local-generator[212294]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:57 compute-1 systemd-sysv-generator[212297]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:57 compute-1 virtqemud[192683]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 09:46:57 compute-1 virtqemud[192683]: hostname: compute-1
Nov 25 09:46:57 compute-1 virtqemud[192683]: nl_recv returned with error: No buffer space available
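virtqemud's "nl_recv returned with error: No buffer space available" is ENOBUFS on a netlink socket: the kernel emitted events (here, the burst of interface and module changes) faster than libvirt drained them, so some notifications were dropped. It is usually transient; if it recurs, the default socket receive buffers can be inspected and raised (the value below is illustrative, not a recommendation derived from this log):

    sysctl net.core.rmem_default net.core.rmem_max
    sudo sysctl -w net.core.rmem_max=16777216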
Nov 25 09:46:57 compute-1 systemd-logind[746]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 09:46:57 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:46:57 compute-1 systemd-logind[746]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 09:46:57 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:46:57 compute-1 systemd[1]: Reloading.
Nov 25 09:46:57 compute-1 lvm[212368]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:46:57 compute-1 lvm[212368]: VG ceph_vg0 finished
Nov 25 09:46:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:57.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:58 compute-1 systemd-sysv-generator[212402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:58 compute-1 systemd-rc-local-generator[212396]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:58 compute-1 systemd-coredump[212231]: Process 136919 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 71:
                                                    #0  0x00007f849c47832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:46:58 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:46:58 compute-1 systemd[1]: systemd-coredump@4-212230-0.service: Deactivated successfully.
Nov 25 09:46:58 compute-1 systemd[1]: systemd-coredump@4-212230-0.service: Consumed 1.024s CPU time.
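systemd-coredump stored the core but prints only one frame (frame #0 in libntirpc.so.5.8 + 0x2232e, matching the faulting IP in the kernel segfault line above). The full backtrace can be pulled from the journal after the fact; PID 136919 comes from the coredump entry, while gdb and ganesha/ntirpc debuginfo being installed are assumptions:

    coredumpctl info 136919        # metadata plus the stored stack
    coredumpctl debug 136919       # opens gdb on the core; then 'bt' or 'thread apply all bt'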
Nov 25 09:46:58 compute-1 podman[212546]: 2025-11-25 09:46:58.275350043 +0000 UTC m=+0.021782123 container died c92c0080df66c8a94af267e8902ece696b7263f84a0a745db5bcfad4e78adbbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:46:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-fd25132732b0c2e8e7734be381f8dc6ab3704d3a695953ef3643eeec1ffbfda5-merged.mount: Deactivated successfully.
Nov 25 09:46:58 compute-1 podman[212546]: 2025-11-25 09:46:58.298902892 +0000 UTC m=+0.045334962 container remove c92c0080df66c8a94af267e8902ece696b7263f84a0a745db5bcfad4e78adbbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:46:58 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:46:58 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:46:58 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.267s CPU time.
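status=139 is 128 + 11: the unit's main process (the podman wrapper for the ganesha container) exited carrying SIGSEGV, consistent with the crash above rather than a clean shutdown. This can be confirmed from the unit's own properties:

    systemctl show ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service \
      -p ExecMainCode,ExecMainStatus,NRestarts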
Nov 25 09:46:58 compute-1 ceph-mon[79643]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:46:58 compute-1 sudo[212223]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:58.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:58 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:46:58 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:46:58 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.069s CPU time.
Nov 25 09:46:58 compute-1 systemd[1]: run-r22070618540e4ca7ae1429ce6f67f10f.service: Deactivated successfully.
Nov 25 09:46:59 compute-1 sudo[213739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weulwugjomfmhftacyfceaxzepjhmwjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064019.015222-1971-2272222866115/AnsiballZ_systemd_service.py'
Nov 25 09:46:59 compute-1 sudo[213739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:59 compute-1 python3.9[213741]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:46:59 compute-1 systemd[1]: Stopping Open-iSCSI...
Nov 25 09:46:59 compute-1 iscsid[201915]: iscsid shutting down.
Nov 25 09:46:59 compute-1 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 09:46:59 compute-1 systemd[1]: Stopped Open-iSCSI.
Nov 25 09:46:59 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 09:46:59 compute-1 systemd[1]: Starting Open-iSCSI...
Nov 25 09:46:59 compute-1 systemd[1]: Started Open-iSCSI.
Nov 25 09:46:59 compute-1 sudo[213739]: pam_unix(sudo:session): session closed for user root
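The iscsid bounce is routine service management; the "unmet condition check" line only means the one-time iscsi.service setup was skipped because /etc/iscsi/initiatorname.iscsi already exists (ConditionPathExists=! inverts the test). Equivalent by hand:

    sudo systemctl restart iscsid.service
    cat /etc/iscsi/initiatorname.iscsi     # prints the InitiatorName= line (value not shown in this log)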
Nov 25 09:46:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:46:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:59.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:00 compute-1 python3.9[213895]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:47:00 compute-1 ceph-mon[79643]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:00 compute-1 sudo[213924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:47:00 compute-1 sudo[213924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:00 compute-1 sudo[213924]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:00 compute-1 sudo[214074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnamhqbxccpjugzkkdhnodfsuqzdxcwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064020.5836575-2023-53995638071191/AnsiballZ_file.py'
Nov 25 09:47:00 compute-1 sudo[214074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:00 compute-1 python3.9[214076]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:00 compute-1 sudo[214074]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:01 compute-1 sudo[214227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmkemyjcatxtfdgwmfqwkbcypcvraxzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064021.4115999-2056-57918697397951/AnsiballZ_systemd_service.py'
Nov 25 09:47:01 compute-1 sudo[214227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:01 compute-1 python3.9[214229]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:47:01 compute-1 systemd[1]: Reloading.
Nov 25 09:47:01 compute-1 systemd-sysv-generator[214253]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:47:01 compute-1 systemd-rc-local-generator[214250]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:47:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:01.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:02 compute-1 sudo[214227]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094702 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:47:02 compute-1 ceph-mon[79643]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:02 compute-1 python3.9[214413]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:47:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:02.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:02 compute-1 network[214430]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:47:02 compute-1 network[214431]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:47:02 compute-1 network[214432]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:47:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094702 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
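haproxy sees the crash at the load-balancer layer: backend nfs.cephfs.0 (the ganesha instance that just died on this host) goes DOWN on a refused Layer4 connection while nfs.cephfs.2 checks UP, leaving 2 of 3 servers active. Server state can be queried over haproxy's admin socket from inside its container (the container name is taken from the log prefix; the socket path and the presence of socat are assumptions):

    sudo podman exec ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq \
      sh -c 'echo "show servers state" | socat stdio /var/lib/haproxy/admin.sock'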
Nov 25 09:47:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:03.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:04 compute-1 ceph-mon[79643]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:47:04.994 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:47:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:47:04.994 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:47:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:47:04.994 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:47:05 compute-1 sudo[214707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhgezppyqbrrydowiedzwwdtjciqesns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064025.0511334-2113-189818855910494/AnsiballZ_systemd_service.py'
Nov 25 09:47:05 compute-1 sudo[214707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:05 compute-1 python3.9[214709]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:05 compute-1 sudo[214707]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:05 compute-1 sudo[214860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkhrzsbmvusjynflenvmnbszdehrhkdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064025.6447365-2113-123623944739570/AnsiballZ_systemd_service.py'
Nov 25 09:47:05 compute-1 sudo[214860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:05.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:06 compute-1 python3.9[214862]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:06 compute-1 sudo[214860]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:06 compute-1 sudo[215013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcitttrpesngkwwrwpnitdnbsbijisuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064026.1922324-2113-30801526737604/AnsiballZ_systemd_service.py'
Nov 25 09:47:06 compute-1 sudo[215013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:06 compute-1 ceph-mon[79643]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:06.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:06 compute-1 python3.9[215015]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:06 compute-1 sudo[215013]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:06 compute-1 sudo[215166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgfkrmebpdguiledjocadwgpcfixryxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064026.7510605-2113-222647539369315/AnsiballZ_systemd_service.py'
Nov 25 09:47:06 compute-1 sudo[215166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:07 compute-1 python3.9[215168]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:07 compute-1 sudo[215166]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:07 compute-1 sudo[215320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtnwicgocnnshvntduhxzvlqphgxkkbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064027.3104377-2113-49305458216214/AnsiballZ_systemd_service.py'
Nov 25 09:47:07 compute-1 sudo[215320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:07 compute-1 python3.9[215322]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:07 compute-1 sudo[215320]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:08 compute-1 sudo[215473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwsxhfigruxokelwjngitafeoojgvqqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064027.8671098-2113-27619002221955/AnsiballZ_systemd_service.py'
Nov 25 09:47:08 compute-1 sudo[215473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:08 compute-1 python3.9[215475]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:08 compute-1 sudo[215473]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:08 compute-1 podman[215477]: 2025-11-25 09:47:08.386042678 +0000 UTC m=+0.040939107 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 25 09:47:08 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 5.
Nov 25 09:47:08 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:47:08 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.267s CPU time.
Nov 25 09:47:08 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:47:08 compute-1 ceph-mon[79643]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:08 compute-1 podman[215649]: 2025-11-25 09:47:08.634451974 +0000 UTC m=+0.031321751 container create 8f85fcb1f978443303d56bdbb25b390182231b6d74e38d1ca35f0618383baf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:47:08 compute-1 sudo[215689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypiikulnckqlkpqyjoxdppeemifqrla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064028.4555995-2113-48426295901229/AnsiballZ_systemd_service.py'
Nov 25 09:47:08 compute-1 sudo[215689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d2d5b51d2d37ea7bef235f8586bbe5b2fb9723f66fe4a15266ab43661cde28/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d2d5b51d2d37ea7bef235f8586bbe5b2fb9723f66fe4a15266ab43661cde28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d2d5b51d2d37ea7bef235f8586bbe5b2fb9723f66fe4a15266ab43661cde28/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d2d5b51d2d37ea7bef235f8586bbe5b2fb9723f66fe4a15266ab43661cde28/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:08 compute-1 podman[215649]: 2025-11-25 09:47:08.679293396 +0000 UTC m=+0.076163203 container init 8f85fcb1f978443303d56bdbb25b390182231b6d74e38d1ca35f0618383baf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:47:08 compute-1 podman[215649]: 2025-11-25 09:47:08.683170863 +0000 UTC m=+0.080040641 container start 8f85fcb1f978443303d56bdbb25b390182231b6d74e38d1ca35f0618383baf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:47:08 compute-1 bash[215649]: 8f85fcb1f978443303d56bdbb25b390182231b6d74e38d1ca35f0618383baf9d
Nov 25 09:47:08 compute-1 podman[215649]: 2025-11-25 09:47:08.62286137 +0000 UTC m=+0.019731167 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:47:08 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:47:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:47:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:47:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:47:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:47:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:47:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:47:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:47:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:47:08 compute-1 python3.9[215695]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:08 compute-1 sudo[215689]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:09 compute-1 sudo[215887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czekxrykzjosalxfnevmwhivitxxzpwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064029.0337656-2113-37492331882928/AnsiballZ_systemd_service.py'
Nov 25 09:47:09 compute-1 sudo[215887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:09 compute-1 python3.9[215889]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:09 compute-1 sudo[215887]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:09.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:10 compute-1 sudo[216040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-einhlcetmukjrlhyekykshvdchtsykga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064029.9324923-2290-68844452206602/AnsiballZ_file.py'
Nov 25 09:47:10 compute-1 sudo[216040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:10 compute-1 python3.9[216042]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:10 compute-1 sudo[216040]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:10 compute-1 ceph-mon[79643]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:47:10 compute-1 sudo[216192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzobrejfggjkmbxbmeajiwanznpdheil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064030.3565464-2290-188807744818465/AnsiballZ_file.py'
Nov 25 09:47:10 compute-1 sudo[216192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:10.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:10 compute-1 python3.9[216194]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:10 compute-1 sudo[216192]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:10 compute-1 sudo[216344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwdjmgufqyrzsajylnfuumrjrxqorxiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064030.785758-2290-43619318537156/AnsiballZ_file.py'
Nov 25 09:47:10 compute-1 sudo[216344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:11 compute-1 python3.9[216346]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:11 compute-1 sudo[216344]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:11 compute-1 sudo[216497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhmpapwlecdrupceanyytsfyywechbhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064031.2065508-2290-239580379594212/AnsiballZ_file.py'
Nov 25 09:47:11 compute-1 sudo[216497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:11 compute-1 python3.9[216499]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:11 compute-1 sudo[216497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:11 compute-1 sudo[216649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxmvgdzthnshazzgiwijuyhixrugsrwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064031.6211433-2290-176612914068388/AnsiballZ_file.py'
Nov 25 09:47:11 compute-1 sudo[216649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:11 compute-1 python3.9[216651]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:11 compute-1 sudo[216649]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:11.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:12 compute-1 sudo[216801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opyvytooptwmcjchoxcsgwzpsknpknwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064032.0324929-2290-46695007971237/AnsiballZ_file.py'
Nov 25 09:47:12 compute-1 sudo[216801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:12 compute-1 python3.9[216803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:12 compute-1 sudo[216801]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:12 compute-1 ceph-mon[79643]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:12.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:12 compute-1 sudo[216953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgxlepvfcdjqfpipczxgpvccqckuspll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064032.4335277-2290-205648659213015/AnsiballZ_file.py'
Nov 25 09:47:12 compute-1 sudo[216953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:12 compute-1 python3.9[216955]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:12 compute-1 sudo[216953]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:13 compute-1 sudo[217105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hllncxvukklvcnqisvolllprvvzmadlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064032.8741856-2290-130085129853223/AnsiballZ_file.py'
Nov 25 09:47:13 compute-1 sudo[217105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:13 compute-1 python3.9[217107]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:13 compute-1 sudo[217105]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:13 compute-1 sudo[217258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjircrsomkjajzfsiawkafficzgkcumt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064033.6305072-2461-159250047255598/AnsiballZ_file.py'
Nov 25 09:47:13 compute-1 sudo[217258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:13 compute-1 python3.9[217260]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:13 compute-1 sudo[217258]: pam_unix(sudo:session): session closed for user root
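The same removal series now repeats against /etc/systemd/system, catching any local overrides of the packaged units; the equivalent loop differs only in the path:

  for svc in compute migration_target api_cron api conductor metadata scheduler vnc_proxy; do
    rm -f "/etc/systemd/system/tripleo_nova_${svc}.service"
  done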
Nov 25 09:47:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:13.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:14 compute-1 sudo[217410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uprunauidtcvbyqauwznflpsmwwpiukx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064034.0764947-2461-210233393559783/AnsiballZ_file.py'
Nov 25 09:47:14 compute-1 sudo[217410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:14 compute-1 python3.9[217412]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:14 compute-1 sudo[217410]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:14 compute-1 ceph-mon[79643]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:14.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:14 compute-1 sudo[217562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qohdyklfrasovrtrzccxxkvghcbxpwxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064034.5323036-2461-100232132004389/AnsiballZ_file.py'
Nov 25 09:47:14 compute-1 sudo[217562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:14 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:47:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:14 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:47:14 compute-1 python3.9[217564]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:14 compute-1 sudo[217562]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:15 compute-1 sudo[217714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cymuadimbygehqglkuqqnafcuwfugfsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064034.9810593-2461-91741301743415/AnsiballZ_file.py'
Nov 25 09:47:15 compute-1 sudo[217714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:15 compute-1 python3.9[217716]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:15 compute-1 sudo[217714]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:15 compute-1 sudo[217879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoygteaxnntxbqxphlddugsidmusvayw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064035.4227247-2461-29227818764967/AnsiballZ_file.py'
Nov 25 09:47:15 compute-1 sudo[217879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:15 compute-1 podman[217841]: 2025-11-25 09:47:15.631733406 +0000 UTC m=+0.057170679 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
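The podman health_status events record scheduled runs of each container's configured healthcheck (here /openstack/healthcheck inside ovn_controller). The same check can be triggered on demand; a sketch, using the container name from the event above:

  # Exit status 0 means the check passed, mirroring health_status=healthy above.
  podman healthcheck run ovn_controller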
Nov 25 09:47:15 compute-1 python3.9[217886]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:15 compute-1 sudo[217879]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:15.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:16 compute-1 sudo[218042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oojvhavppzybztxfmxnspmudpjhrjbmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064035.891847-2461-183942919938807/AnsiballZ_file.py'
Nov 25 09:47:16 compute-1 sudo[218042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:16 compute-1 python3.9[218044]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:16 compute-1 sudo[218042]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:16 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 09:47:16 compute-1 sudo[218195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idqvpnqgyazanntpnwmdvihqqtdlddxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064036.3372757-2461-78114528371040/AnsiballZ_file.py'
Nov 25 09:47:16 compute-1 sudo[218195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:16 compute-1 ceph-mon[79643]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:16.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:16 compute-1 python3.9[218197]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:16 compute-1 sudo[218195]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:16 compute-1 sudo[218347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feznpztmpdffhgxvkdgxubanzrrkcbkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064036.7807767-2461-273290612912669/AnsiballZ_file.py'
Nov 25 09:47:16 compute-1 sudo[218347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:17 compute-1 python3.9[218349]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:17 compute-1 sudo[218347]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:17 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 09:47:17 compute-1 sudo[218501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqbeuzqlmfjkjcfibmsfmiqnycfpianp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064037.403362-2635-252447666503617/AnsiballZ_command.py'
Nov 25 09:47:17 compute-1 sudo[218501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:17 compute-1 python3.9[218503]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:17 compute-1 sudo[218501]: pam_unix(sudo:session): session closed for user root
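For readability, the certmonger teardown logged in the command task above, re-wrapped with comments (logic taken verbatim from the logged _raw_params):

  if systemctl is-active certmonger.service; then
    systemctl disable --now certmonger.service
    # Mask only when no local unit override exists in /etc/systemd/system,
    # so a site-provided unit file is left usable.
    test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
  fi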
Nov 25 09:47:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:17.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:18 compute-1 python3.9[218655]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:47:18 compute-1 ceph-mon[79643]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:18.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:18 compute-1 sudo[218805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhjrtydodgzmddojsdszpipfazmxnmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064038.675474-2689-171869372981430/AnsiballZ_systemd_service.py'
Nov 25 09:47:18 compute-1 sudo[218805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:19 compute-1 python3.9[218807]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:47:19 compute-1 systemd[1]: Reloading.
Nov 25 09:47:19 compute-1 systemd-rc-local-generator[218834]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:47:19 compute-1 systemd-sysv-generator[218838]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:47:19 compute-1 sudo[218805]: pam_unix(sudo:session): session closed for user root
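With name and state unset and daemon_reload=True, the systemd_service module only re-reads unit definitions, which is why systemd logs "Reloading." and re-runs its generators right after. The shell equivalent:

  # Re-parse all unit files; the generator warnings (rc.local, SysV network) reappear here.
  systemctl daemon-reload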
Nov 25 09:47:19 compute-1 sudo[218993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thhenmazyugpbqupfueezdlctypxphft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064039.5718565-2713-235318132426341/AnsiballZ_command.py'
Nov 25 09:47:19 compute-1 sudo[218993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:19 compute-1 python3.9[218995]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:19 compute-1 sudo[218993]: pam_unix(sudo:session): session closed for user root
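The command tasks that follow clear any lingering "failed" state for the removed units so they drop out of systemctl --failed output. Collapsed into one sketch, with the service list as logged:

  for svc in compute migration_target api_cron api conductor metadata scheduler vnc_proxy; do
    systemctl reset-failed "tripleo_nova_${svc}.service"
  done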
Nov 25 09:47:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:19.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:20 compute-1 sudo[219146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otyvqdsuhvbnwyiacwadqtllsipcsfaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064040.025035-2713-119136614087197/AnsiballZ_command.py'
Nov 25 09:47:20 compute-1 sudo[219146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:20 compute-1 python3.9[219148]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:20 compute-1 sudo[219146]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:20 compute-1 ceph-mon[79643]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:20.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:20 compute-1 sudo[219274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:47:20 compute-1 sudo[219274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:20 compute-1 sudo[219274]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:20 compute-1 sudo[219337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjixzzgoqwckdnmnrhxesdmgxhrvlzoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064040.4670708-2713-148448841065671/AnsiballZ_command.py'
Nov 25 09:47:20 compute-1 sudo[219337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:20 compute-1 podman[219273]: 2025-11-25 09:47:20.664761421 +0000 UTC m=+0.048253882 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:47:20 compute-1 python3.9[219342]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:20 compute-1 sudo[219337]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:21 compute-1 sudo[219509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhgiflivfkxlxukjrkmdbmtsrmvpnln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064040.9304597-2713-259300781472063/AnsiballZ_command.py'
Nov 25 09:47:21 compute-1 sudo[219509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:21 compute-1 python3.9[219511]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:21 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c001ea0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:21 compute-1 sudo[219509]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:21 compute-1 sudo[219613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:47:21 compute-1 sudo[219613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:21 compute-1 sudo[219613]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:21 compute-1 sudo[219662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:47:21 compute-1 sudo[219662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
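The hash-suffixed cephadm file invoked here is a cached copy of the cephadm tool; its "ls" subcommand inventories the Ceph daemons deployed on this host as JSON. A sketch of the same query with a packaged cephadm, image pinning aside:

  cephadm ls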
Nov 25 09:47:21 compute-1 sudo[219712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nquefjshfrmhukqjnggopmekvlaiebde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064041.363464-2713-269068801256294/AnsiballZ_command.py'
Nov 25 09:47:21 compute-1 sudo[219712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:21 compute-1 python3.9[219715]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:21 compute-1 sudo[219712]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:21 compute-1 podman[219849]: 2025-11-25 09:47:21.919027394 +0000 UTC m=+0.042117509 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:47:21 compute-1 sudo[219939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okptxrosqlwjmkaxelkumqhuuivwbues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064041.8197963-2713-239207750587654/AnsiballZ_command.py'
Nov 25 09:47:21 compute-1 sudo[219939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:22.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:22 compute-1 podman[219849]: 2025-11-25 09:47:22.002647929 +0000 UTC m=+0.125738024 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:47:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:22 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748001df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:22 compute-1 python3.9[219941]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:22 compute-1 sudo[219939]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:22 compute-1 podman[220101]: 2025-11-25 09:47:22.371386275 +0000 UTC m=+0.038802951 container exec 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:47:22 compute-1 podman[220101]: 2025-11-25 09:47:22.379596168 +0000 UTC m=+0.047012834 container exec_died 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:47:22 compute-1 sudo[220245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjayxqypdxvcppoboifpeirravehgen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064042.3014948-2713-105186617187102/AnsiballZ_command.py'
Nov 25 09:47:22 compute-1 sudo[220245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:22 compute-1 podman[220248]: 2025-11-25 09:47:22.539091442 +0000 UTC m=+0.034012973 container exec 8f85fcb1f978443303d56bdbb25b390182231b6d74e38d1ca35f0618383baf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:47:22 compute-1 podman[220248]: 2025-11-25 09:47:22.550587327 +0000 UTC m=+0.045508858 container exec_died 8f85fcb1f978443303d56bdbb25b390182231b6d74e38d1ca35f0618383baf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:47:22 compute-1 ceph-mon[79643]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:22.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:22 compute-1 python3.9[220250]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:22 compute-1 sudo[220245]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:22 compute-1 podman[220300]: 2025-11-25 09:47:22.69726205 +0000 UTC m=+0.034726559 container exec 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:47:22 compute-1 podman[220300]: 2025-11-25 09:47:22.707638385 +0000 UTC m=+0.045102894 container exec_died 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:47:22 compute-1 podman[220411]: 2025-11-25 09:47:22.851642033 +0000 UTC m=+0.034638112 container exec 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, name=keepalived, description=keepalived for Ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Nov 25 09:47:22 compute-1 podman[220457]: 2025-11-25 09:47:22.916487695 +0000 UTC m=+0.050161638 container exec_died 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 25 09:47:22 compute-1 podman[220411]: 2025-11-25 09:47:22.920098871 +0000 UTC m=+0.103094970 container exec_died 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 25 09:47:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094722 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:47:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:22 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c001ea0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:22 compute-1 sudo[219662]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:22 compute-1 sudo[220527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjbfdeqqcokrtqqojwzcvybtwjvlxjtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064042.7879536-2713-106384941235105/AnsiballZ_command.py'
Nov 25 09:47:22 compute-1 sudo[220527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:23 compute-1 sudo[220530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:47:23 compute-1 sudo[220530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:23 compute-1 sudo[220530]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:23 compute-1 sudo[220555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:47:23 compute-1 sudo[220555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:23 compute-1 python3.9[220529]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:23 compute-1 sudo[220527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:23 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748001df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:23 compute-1 sudo[220555]: pam_unix(sudo:session): session closed for user root
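cephadm gather-facts, apparently driven here by the mgr's periodic host polling, dumps host inventory (network, storage, memory) as JSON. A sketch for inspecting it by hand; json.tool is only for pretty-printing:

  cephadm gather-facts | python3 -m json.tool | head -n 20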
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-1 ceph-mon[79643]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:47:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:47:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:24.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:24 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748001df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:24 compute-1 sudo[220760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmhkwwmtqttifqnvgynftxnzqgqwdqnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064044.3128495-2920-117726907052056/AnsiballZ_file.py'
Nov 25 09:47:24 compute-1 sudo[220760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:24.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:24 compute-1 python3.9[220762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:24 compute-1 sudo[220760]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:24 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:24 compute-1 sudo[220912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktecjtmmqldpzhhegrrnuvdgcwugtrhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064044.7697065-2920-29018376045092/AnsiballZ_file.py'
Nov 25 09:47:24 compute-1 sudo[220912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:25 compute-1 python3.9[220914]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:25 compute-1 sudo[220912]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:25 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57480094f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:25 compute-1 sudo[221065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adsbdvqithdvjvvvafctyvqxdcimxcvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064045.2722476-2920-61365843121211/AnsiballZ_file.py'
Nov 25 09:47:25 compute-1 sudo[221065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:25 compute-1 python3.9[221067]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:25 compute-1 sudo[221065]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:25 compute-1 sudo[221217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zugzpuedolbsxsghznsmjibnchzycpus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064045.7999048-2986-133201769688931/AnsiballZ_file.py'
Nov 25 09:47:25 compute-1 sudo[221217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:26.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:26 compute-1 python3.9[221219]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:26 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c002db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:26 compute-1 sudo[221217]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:26 compute-1 ceph-mon[79643]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:26 compute-1 sudo[221369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuizjnuvzrktrmopykknmyveqnnwgtay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064046.2591598-2986-210757176621705/AnsiballZ_file.py'
Nov 25 09:47:26 compute-1 sudo[221369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:26 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 09:47:26 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 09:47:26 compute-1 python3.9[221371]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:26 compute-1 sudo[221369]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:26 compute-1 sudo[221374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:47:26 compute-1 sudo[221374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:26 compute-1 sudo[221374]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:26 compute-1 sudo[221548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwpiyqmjbeonxvqxfmyotnbkpxrrvjpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064046.7208304-2986-183262559932437/AnsiballZ_file.py'
Nov 25 09:47:26 compute-1 sudo[221548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:26 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748009690 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:27 compute-1 python3.9[221550]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:27 compute-1 sudo[221548]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:27 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57440025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:27 compute-1 sudo[221701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeggijwjtjxmsqwdujryxfgvfgtrcdzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064047.1738563-2986-232214634481224/AnsiballZ_file.py'
Nov 25 09:47:27 compute-1 sudo[221701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:27 compute-1 python3.9[221703]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:27 compute-1 sudo[221701]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:27 compute-1 sudo[221853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pilaweqbbsrjcyyaaqsfeckzerumsbbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064047.6196463-2986-200712986298630/AnsiballZ_file.py'
Nov 25 09:47:27 compute-1 sudo[221853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:27 compute-1 python3.9[221855]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:27 compute-1 sudo[221853]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:28.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:28 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748009690 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:28 compute-1 sudo[222005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypezwszkkqlnheqvhnfhuylwptmqjcxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064048.0725746-2986-111187520741434/AnsiballZ_file.py'
Nov 25 09:47:28 compute-1 sudo[222005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094728 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:47:28 compute-1 python3.9[222007]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:28 compute-1 sudo[222005]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:28 compute-1 ceph-mon[79643]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:28.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:28 compute-1 sudo[222157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjegywfmsgchhhxmvwrahexuvjdbslbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064048.5207508-2986-234850709571767/AnsiballZ_file.py'
Nov 25 09:47:28 compute-1 sudo[222157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:28 compute-1 python3.9[222159]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:28 compute-1 sudo[222157]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:28 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c002db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:29 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800a7c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:30.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:30 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57440025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:30 compute-1 ceph-mon[79643]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:30.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:30 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800a7c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:31 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c002db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:32.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:32 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800b4d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:32 compute-1 ceph-mon[79643]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:32.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:32 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57440032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:33 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800b4d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:33 compute-1 sudo[222312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caghmjxcqsiyfdttzyynuvlmuutvyyug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064052.9697077-3311-51678317618776/AnsiballZ_getent.py'
Nov 25 09:47:33 compute-1 sudo[222312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:33 compute-1 python3.9[222314]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 09:47:33 compute-1 sudo[222312]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:33 compute-1 sudo[222465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jemaboypbjpbcsnckpehudcylthyusqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064053.6296604-3335-231300379958689/AnsiballZ_group.py'
Nov 25 09:47:33 compute-1 sudo[222465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:34 compute-1 python3.9[222467]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:47:34 compute-1 groupadd[222468]: group added to /etc/group: name=nova, GID=42436
Nov 25 09:47:34 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:47:34 compute-1 groupadd[222468]: group added to /etc/gshadow: name=nova
Nov 25 09:47:34 compute-1 groupadd[222468]: new group: name=nova, GID=42436
Nov 25 09:47:34 compute-1 sudo[222465]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:34 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800b4d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:34 compute-1 ceph-mon[79643]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:47:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:34.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:34 compute-1 sudo[222624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfyinspuhdndhncqfvkfokpbelmumvcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064054.3184311-3359-133634836434096/AnsiballZ_user.py'
Nov 25 09:47:34 compute-1 sudo[222624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:34 compute-1 python3.9[222626]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 09:47:34 compute-1 useradd[222628]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 25 09:47:34 compute-1 useradd[222628]: add 'nova' to group 'libvirt'
Nov 25 09:47:34 compute-1 useradd[222628]: add 'nova' to shadow group 'libvirt'
Nov 25 09:47:34 compute-1 sudo[222624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:34 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800b4d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:35 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57440032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:35 compute-1 sshd-session[222660]: Accepted publickey for zuul from 192.168.122.30 port 44726 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:47:35 compute-1 systemd-logind[746]: New session 52 of user zuul.
Nov 25 09:47:35 compute-1 systemd[1]: Started Session 52 of User zuul.
Nov 25 09:47:35 compute-1 sshd-session[222660]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:47:35 compute-1 sshd-session[222663]: Received disconnect from 192.168.122.30 port 44726:11: disconnected by user
Nov 25 09:47:35 compute-1 sshd-session[222663]: Disconnected from user zuul 192.168.122.30 port 44726
Nov 25 09:47:35 compute-1 sshd-session[222660]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:47:35 compute-1 systemd[1]: session-52.scope: Deactivated successfully.
Nov 25 09:47:35 compute-1 systemd-logind[746]: Session 52 logged out. Waiting for processes to exit.
Nov 25 09:47:35 compute-1 systemd-logind[746]: Removed session 52.
Nov 25 09:47:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:35 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:47:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:36 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:36 compute-1 python3.9[222813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:36.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:36 compute-1 ceph-mon[79643]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:47:36 compute-1 python3.9[222934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064055.9982533-3434-205826572195945/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:36 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:37 compute-1 python3.9[223084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:37 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:37 compute-1 python3.9[223161]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:37 compute-1 python3.9[223311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:38.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:38 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:38 compute-1 python3.9[223432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064057.5275302-3434-242601748011101/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:38.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:38 compute-1 ceph-mon[79643]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:47:38 compute-1 podman[223556]: 2025-11-25 09:47:38.659303753 +0000 UTC m=+0.075711511 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 09:47:38 compute-1 python3.9[223593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:38 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:47:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:38 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:47:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:38 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:39 compute-1 python3.9[223720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064058.3877301-3434-44564515180330/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:39 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:39 compute-1 python3.9[223871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:39 compute-1 python3.9[223992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064059.2611475-3434-30584047506240/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:40.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:40 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:40 compute-1 python3.9[224142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:40.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:40 compute-1 ceph-mon[79643]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:47:40 compute-1 sudo[224264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:47:40 compute-1 sudo[224264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:40 compute-1 sudo[224264]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:40 compute-1 python3.9[224263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064060.0818286-3434-48108225013773/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:40 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:41 compute-1 sudo[224439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpztjyzakbizxoicirxyjfrrykiqbcog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064061.000798-3683-258372020249502/AnsiballZ_file.py'
Nov 25 09:47:41 compute-1 sudo[224439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:41 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:41 compute-1 python3.9[224441]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:41 compute-1 sudo[224439]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:41 compute-1 sudo[224591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtwkgvrbyvxiiqwejgxsppbssbxloqxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064061.5628982-3707-6050925396768/AnsiballZ_copy.py'
Nov 25 09:47:41 compute-1 sudo[224591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:41 compute-1 python3.9[224593]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:41 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:47:41 compute-1 sudo[224591]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:42.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:42 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:42 compute-1 sudo[224743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmzrxkoatndbcfpgmcwawlkubwcmafsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064062.113106-3731-177450206659625/AnsiballZ_stat.py'
Nov 25 09:47:42 compute-1 sudo[224743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:42 compute-1 python3.9[224745]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:47:42 compute-1 sudo[224743]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:42.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:42 compute-1 ceph-mon[79643]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:42 compute-1 sudo[224895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxzjnanurcsoyzcmuxbjuiytytbadtkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064062.6552026-3755-99180566106202/AnsiballZ_stat.py'
Nov 25 09:47:42 compute-1 sudo[224895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:42 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5750001080 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:43 compute-1 python3.9[224897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:43 compute-1 sudo[224895]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:43 compute-1 sudo[225019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkevhzxufqzjkslkfmpoqumdgpbovbnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064062.6552026-3755-99180566106202/AnsiballZ_copy.py'
Nov 25 09:47:43 compute-1 sudo[225019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:43 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:43 compute-1 python3.9[225021]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764064062.6552026-3755-99180566106202/.source _original_basename=.59vhgy5p follow=False checksum=7287bb5d3f399f8e09ddca643d926b7a998b8705 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 09:47:43 compute-1 sudo[225019]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:44.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:44 compute-1 python3.9[225173]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:47:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:44 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:47:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:44.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:47:44 compute-1 ceph-mon[79643]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:47:44 compute-1 python3.9[225325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:44 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:45 compute-1 python3.9[225446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064064.3583639-3833-202367615174354/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:45 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:45 compute-1 python3.9[225597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:45 compute-1 podman[225645]: 2025-11-25 09:47:45.808360078 +0000 UTC m=+0.062091589 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:47:46 compute-1 python3.9[225741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064065.2805374-3878-62272080208885/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:46.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:46 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:46 compute-1 sudo[225891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjgxzsuctrpqfrovzpyksoisvooucybu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064066.4248335-3929-131187758090560/AnsiballZ_container_config_data.py'
Nov 25 09:47:46 compute-1 sudo[225891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:46.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:46 compute-1 ceph-mon[79643]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:47:46 compute-1 python3.9[225893]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 09:47:46 compute-1 sudo[225891]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:46 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:47 compute-1 sudo[226044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsgfldaocvdzhrxhbiryxobdqdhpndpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064067.0231252-3956-85411078906219/AnsiballZ_container_config_hash.py'
Nov 25 09:47:47 compute-1 sudo[226044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:47 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:47 compute-1 python3.9[226046]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:47:47 compute-1 sudo[226044]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:47 compute-1 sudo[226196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvnjrqblqmmognlbxtichbwoyqdwlcld ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764064067.729637-3986-51938889180380/AnsiballZ_edpm_container_manage.py'
Nov 25 09:47:47 compute-1 sudo[226196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:48 compute-1 python3[226198]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:47:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:48 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5750001bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094748 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:47:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:48.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:48 compute-1 ceph-mon[79643]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:48 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:49 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:50 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:50.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:50 compute-1 ceph-mon[79643]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:47:50 compute-1 podman[226231]: 2025-11-25 09:47:50.815890242 +0000 UTC m=+0.067844904 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
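
Note: podman emits a "container health_status" journal event like the one above each time a container's configured healthcheck fires; here multipathd reports healthy with a failing streak of 0. The same check can be invoked on demand (container name taken from the event; a 0 exit status means healthy):

    # run the container's built-in healthcheck once, by hand
    podman healthcheck run multipathd && echo "multipathd: healthy"
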
Nov 25 09:47:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:50 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57500024e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:51 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:52.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:52 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:52.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:52 compute-1 ceph-mon[79643]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:47:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:52 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:53 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:54.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:54 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:54.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:54 compute-1 ceph-mon[79643]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:54 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:55 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:56 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094756 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
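
Note: read together with the 09:47:48 haproxy line above, backend nfs.cephfs.2 flapped within eight seconds: Layer4 check UP at 09:47:48, then "Connection refused" DOWN at 09:47:56, leaving 2 of 3 active servers. On a cephadm-managed cluster the ganesha daemons behind this backend can be cross-checked with the sketch below (NFS cluster name "cephfs" is an assumption inferred from the unit name in these lines):

    # show the NFS cluster layout and the status of each ganesha daemon
    ceph nfs cluster info cephfs
    ceph orch ps --daemon-type nfs
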
Nov 25 09:47:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:56.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:56 compute-1 ceph-mon[79643]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:56 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:57 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:58.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:58 compute-1 podman[226209]: 2025-11-25 09:47:58.160874298 +0000 UTC m=+9.971475721 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 09:47:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:58 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:58 compute-1 podman[226297]: 2025-11-25 09:47:58.260359957 +0000 UTC m=+0.031443556 container create b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:47:58 compute-1 podman[226297]: 2025-11-25 09:47:58.246104779 +0000 UTC m=+0.017188388 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 09:47:58 compute-1 python3[226198]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
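
Note: the PODMAN-CONTAINER-DEBUG line above is the literal command the ansible-edpm_container_manage module assembled from nova_compute_init.json: a one-shot, network-less root container whose only job is to run nova_statedir_ownership.py over /var/lib/nova, piping its output to the journal via logger. One way to verify the result on the host (names taken from the command above):

    # state and config_id label of the init container
    podman inspect nova_compute_init --format '{{.State.Status}} {{index .Config.Labels "config_id"}}'
    # its output lands in the journal under the tag passed to logger -t
    journalctl -t nova_compute_init --since "2025-11-25 09:47:00" | tail
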
Nov 25 09:47:58 compute-1 sudo[226196]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:47:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:58.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:58 compute-1 ceph-mon[79643]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:58 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:47:59 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:59 compute-1 ceph-mon[79643]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:00.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:00 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:00 compute-1 sudo[226475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfxdkmqbmvsgmorofhschxbryskqopzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064080.346094-4010-25309990839147/AnsiballZ_stat.py'
Nov 25 09:48:00 compute-1 sudo[226475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:00.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:00 compute-1 python3.9[226477]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:00 compute-1 sudo[226475]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:00 compute-1 sudo[226480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:48:00 compute-1 sudo[226480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:00 compute-1 sudo[226480]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:00 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:01 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:01 compute-1 sudo[226655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pimsslxbucozulmnssklmgzaclmcjduq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064081.2848291-4046-190825842589647/AnsiballZ_container_config_data.py'
Nov 25 09:48:01 compute-1 sudo[226655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:01 compute-1 ceph-mon[79643]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:48:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:02.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:48:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:02 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:02 compute-1 python3.9[226657]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 09:48:02 compute-1 sudo[226655]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:02.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:02 compute-1 sudo[226807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eelyhgyxxhghbeafuwmocwgfvqgqioku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064082.5518935-4073-217604601160367/AnsiballZ_container_config_hash.py'
Nov 25 09:48:02 compute-1 sudo[226807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:02 compute-1 python3.9[226809]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:48:02 compute-1 sudo[226807]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:02 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f574800c5d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:03 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:03 compute-1 sudo[226960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyejrcvttyinrkktpghgkrnjbuwpphln ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764064083.2623124-4103-161132340975876/AnsiballZ_edpm_container_manage.py'
Nov 25 09:48:03 compute-1 sudo[226960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:03 compute-1 python3[226962]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:48:03 compute-1 podman[226991]: 2025-11-25 09:48:03.779554005 +0000 UTC m=+0.029612375 container create a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, container_name=nova_compute, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:48:03 compute-1 podman[226991]: 2025-11-25 09:48:03.763996111 +0000 UTC m=+0.014054502 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 09:48:03 compute-1 python3[226962]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 25 09:48:03 compute-1 sudo[226960]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:04.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:04 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c0057d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:04 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:48:04 compute-1 sudo[227167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvsfdrlqlpspztheskacmatfsykldofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064084.1363316-4127-105156307319693/AnsiballZ_stat.py'
Nov 25 09:48:04 compute-1 sudo[227167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:04 compute-1 ceph-mon[79643]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:04 compute-1 python3.9[227169]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:04 compute-1 sudo[227167]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:04.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:04 compute-1 sudo[227321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlbpvekewxqjhppgjenpmbokamiurvmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064084.80354-4154-219867162405002/AnsiballZ_file.py'
Nov 25 09:48:04 compute-1 sudo[227321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:04 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c0057d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:48:04.994 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:48:04.994 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:48:04.995 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:05 compute-1 python3.9[227323]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:05 compute-1 sudo[227321]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:05 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:05 compute-1 sudo[227476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnshwbvopjiqyqliswodbibuqecendug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064085.1912029-4154-156311953801696/AnsiballZ_copy.py'
Nov 25 09:48:05 compute-1 sudo[227476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:05 compute-1 python3.9[227478]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764064085.1912029-4154-156311953801696/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:05 compute-1 sudo[227476]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:05 compute-1 sudo[227552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrdlunpbupykjltrwkvectilqbpiwdlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064085.1912029-4154-156311953801696/AnsiballZ_systemd.py'
Nov 25 09:48:05 compute-1 sudo[227552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:06 compute-1 python3.9[227554]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:48:06 compute-1 systemd[1]: Reloading.
Nov 25 09:48:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:06.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:06 compute-1 systemd-rc-local-generator[227574]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:48:06 compute-1 systemd-sysv-generator[227578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:48:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:06 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:06 compute-1 sudo[227552]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:06 compute-1 ceph-mon[79643]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:06 compute-1 sudo[227662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujwazhajhowwkdywrvssnmdkryqdihag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064085.1912029-4154-156311953801696/AnsiballZ_systemd.py'
Nov 25 09:48:06 compute-1 sudo[227662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:06.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:06 compute-1 python3.9[227664]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:48:06 compute-1 systemd[1]: Reloading.
Nov 25 09:48:06 compute-1 systemd-rc-local-generator[227686]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:48:06 compute-1 systemd-sysv-generator[227690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:48:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:06 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680bf270 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:07 compute-1 systemd[1]: Starting nova_compute container...
Nov 25 09:48:07 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:48:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-1 podman[227704]: 2025-11-25 09:48:07.109832541 +0000 UTC m=+0.077282682 container init a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:48:07 compute-1 podman[227704]: 2025-11-25 09:48:07.114838597 +0000 UTC m=+0.082288737 container start a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 25 09:48:07 compute-1 podman[227704]: nova_compute
Nov 25 09:48:07 compute-1 nova_compute[227718]: + sudo -E kolla_set_configs
Nov 25 09:48:07 compute-1 systemd[1]: Started nova_compute container.
Nov 25 09:48:07 compute-1 sudo[227662]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Validating config file
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying service configuration files
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Deleting /etc/ceph
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Creating directory /etc/ceph
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Writing out command to execute
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:07 compute-1 nova_compute[227718]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
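
Note: the INFO block above is kolla_set_configs executing the COPY_ALWAYS strategy: it loads the config.json mounted at /var/lib/kolla/config_files/config.json, copies each listed source into place, fixes ownership and permissions, and writes the service command to /run_command, which the wrapper reads back on the next line. To see exactly what drove the copies, the file can be dumped from the running container (path taken from the "Loading config file" line; python3 being present in the image is an assumption):

    # pretty-print the kolla config that drove the copy sequence
    podman exec nova_compute cat /var/lib/kolla/config_files/config.json | python3 -m json.tool
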
Nov 25 09:48:07 compute-1 nova_compute[227718]: ++ cat /run_command
Nov 25 09:48:07 compute-1 nova_compute[227718]: + CMD=nova-compute
Nov 25 09:48:07 compute-1 nova_compute[227718]: + ARGS=
Nov 25 09:48:07 compute-1 nova_compute[227718]: + sudo kolla_copy_cacerts
Nov 25 09:48:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:07 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:48:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:07 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
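
Note: the reaper lines at 09:48:04 and 09:48:07 bracket an NFS grace window: ganesha entered a 90-second grace period, reloaded client reclaim info from its recovery backend, and is checking whether grace can be lifted early (reclaim complete, zero client IDs). With the rados_cluster recovery backend the shared grace database can be inspected directly; a sketch, with pool and namespace values assumed rather than shown in this journal:

    # dump per-node grace/enforcing flags and the current epochs
    ganesha-rados-grace --pool .nfs --ns cephfs dump
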
Nov 25 09:48:07 compute-1 nova_compute[227718]: + [[ ! -n '' ]]
Nov 25 09:48:07 compute-1 nova_compute[227718]: + . kolla_extend_start
Nov 25 09:48:07 compute-1 nova_compute[227718]: Running command: 'nova-compute'
Nov 25 09:48:07 compute-1 nova_compute[227718]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 09:48:07 compute-1 nova_compute[227718]: + umask 0022
Nov 25 09:48:07 compute-1 nova_compute[227718]: + exec nova-compute
Nov 25 09:48:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:07 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c002600 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:08.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c002600 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:08 compute-1 ceph-mon[79643]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:48:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:08.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:08 compute-1 podman[227756]: 2025-11-25 09:48:08.819339242 +0000 UTC m=+0.071602045 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 25 09:48:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.018 227722 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.018 227722 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.018 227722 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.018 227722 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 25 09:48:09 compute-1 python3.9[227900]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.138 227722 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.150 227722 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.150 227722 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 25 09:48:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:09 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680bfdb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.626 227722 INFO nova.virt.driver [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.733 227722 INFO nova.compute.provider_config [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.764 227722 DEBUG oslo_concurrency.lockutils [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.765 227722 DEBUG oslo_concurrency.lockutils [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.765 227722 DEBUG oslo_concurrency.lockutils [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.765 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.765 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.765 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.766 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.766 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.766 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.766 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.766 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.766 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.766 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.767 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.767 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.767 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.767 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.767 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.767 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.767 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.768 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.768 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.768 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.768 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.768 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.768 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.769 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.769 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.769 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.769 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.769 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.769 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.770 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.770 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.770 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.770 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.770 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.770 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.770 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.771 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.771 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.771 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.771 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.771 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.771 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.772 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.772 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.772 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.772 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.772 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.772 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.772 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.773 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.773 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.773 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.773 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.773 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.773 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.773 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.774 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.774 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.774 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.774 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.774 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.774 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.774 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.775 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.775 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.775 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.775 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.775 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.775 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.775 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.775 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.776 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.776 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.776 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.776 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.776 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.776 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.776 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.777 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.777 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.777 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 python3.9[228053]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.777 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.777 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.777 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.777 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.778 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.778 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.778 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.778 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.778 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.778 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.778 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.779 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.779 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.779 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.779 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.779 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.779 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.779 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.780 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.780 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.780 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.780 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.780 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.780 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.780 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.781 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.781 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.781 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.781 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.781 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.781 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.781 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.781 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.782 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.782 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.782 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.782 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.782 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.782 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.782 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.783 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.783 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.783 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.783 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.783 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.783 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.783 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.784 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.784 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.784 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.784 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.784 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.784 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.784 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.784 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.785 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.785 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.785 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.785 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.785 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.785 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.785 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.786 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.786 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.786 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.786 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.786 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.786 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.786 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.787 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.787 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.787 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.787 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.787 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.787 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.788 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.788 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.788 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.788 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.788 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.788 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.788 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.789 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.789 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.789 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.789 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.789 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.789 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.789 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.790 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.790 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.790 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.790 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.791 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.791 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.791 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.791 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.791 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.791 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.792 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.792 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.792 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.792 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.792 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.792 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.792 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.793 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.793 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.793 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.793 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.793 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.793 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.793 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.794 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.794 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.794 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.794 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.794 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.794 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.794 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.795 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.795 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.795 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.795 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.795 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.795 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.795 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.795 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.796 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
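Every record in this dump carries the same `log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609` suffix: it is oslo.config's `ConfigOpts.log_opt_values()`, which nova-compute calls once at service start to record every registered option, defaults included. A minimal sketch of that mechanism, assuming only stock oslo.config and re-registering just three of the `[cache]` options above (nova registers far more):

```python
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [
        cfg.BoolOpt('enabled', default=True),
        cfg.IntOpt('expiration_time', default=600),
        cfg.ListOpt('memcache_servers', default=['localhost:11211']),
    ],
    group='cache',
)

CONF([])  # parse an empty command line; no config files needed for the demo
# Emits "cache.enabled = True"-style lines, exactly like the journal above.
CONF.log_opt_values(LOG, logging.DEBUG)
```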
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.796 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.796 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.796 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.796 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.796 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.796 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.797 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.797 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.797 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.797 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.797 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.797 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.797 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.798 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.798 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.798 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.798 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.798 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.798 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.798 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.799 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.799 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.799 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.799 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.799 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.799 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.799 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
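The `compute.*` values above are all defaults: `cpu_dedicated_set` and `cpu_shared_set` are unset, and `max_disk_devices_to_attach = -1` means no attach limit. A sketch of how an operator override in nova.conf would surface in this dump; the sample stanza and the plain `StrOpt` registration are illustrative assumptions, not nova's exact option types:

```python
import logging
import tempfile

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

SAMPLE = """
[compute]
# hypothetical operator override; the journal above shows the default (None)
cpu_shared_set = 0-3
"""

CONF = cfg.ConfigOpts()
CONF.register_opts([cfg.StrOpt('cpu_shared_set')], group='compute')

with tempfile.NamedTemporaryFile('w', suffix='.conf') as conf_file:
    conf_file.write(SAMPLE)
    conf_file.flush()
    CONF(['--config-file', conf_file.name])
    # Now logs "compute.cpu_shared_set = 0-3" instead of None.
    CONF.log_opt_values(LOG, logging.DEBUG)
```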
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.800 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.800 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.800 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.800 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.800 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.800 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.800 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.801 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.801 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.801 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.801 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.801 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.801 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.801 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.801 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.802 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.802 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.802 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.802 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.802 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.802 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.802 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.803 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.803 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.803 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.803 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.803 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.803 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.803 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.804 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.804 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.804 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.804 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.804 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.805 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.805 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.805 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.805 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.805 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.805 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.805 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.806 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.806 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.806 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.806 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.806 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.806 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.806 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.807 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.807 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.807 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.807 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.807 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.807 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.807 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.808 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.808 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.808 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.808 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.808 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.808 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.808 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.809 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.809 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
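`database.connection`, `database.slave_connection`, and their `api_database` counterparts print as `****` (as does `key_manager.fixed_key` further down). The masking is oslo.config behavior, not the configured value: options registered with `secret=True` are redacted by `log_opt_values()`. A minimal sketch, with a made-up connection URL:

```python
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [
        # secret=True is why the journal shows "database.connection = ****"
        cfg.StrOpt('connection', secret=True),
        cfg.IntOpt('max_pool_size', default=5),
    ],
    group='database',
)

CONF([])
CONF.set_override('connection', 'mysql+pymysql://nova:example@db/nova',
                  group='database')
# The URL is masked in the dump; max_pool_size is printed normally.
CONF.log_opt_values(LOG, logging.DEBUG)
```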
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.809 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.809 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.809 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.809 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.809 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.810 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.810 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.810 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.810 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.810 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.810 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.810 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.810 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.811 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.811 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.811 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.811 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.811 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.811 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.811 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.812 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.812 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.812 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.812 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.812 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.812 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.812 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.813 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.813 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.813 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.813 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.813 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.813 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.813 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
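`glance.valid_interfaces = ['internal']` is the one visibly non-default value in this group (the `cyborg` and `ironic` equivalents above still show the `['internal', 'public']` default), so this deployment pins image-API lookups to the internal endpoint in `regionOne`. List-typed options parse a comma-separated nova.conf value into a Python list; a sketch, with the temp file and stanza as illustrative assumptions:

```python
import tempfile

from oslo_config import cfg

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [
        cfg.ListOpt('valid_interfaces', default=['internal', 'public']),
        cfg.StrOpt('region_name'),
    ],
    group='glance',
)

with tempfile.NamedTemporaryFile('w', suffix='.conf') as conf_file:
    conf_file.write('[glance]\n'
                    'valid_interfaces = internal\n'
                    'region_name = regionOne\n')
    conf_file.flush()
    CONF(['--config-file', conf_file.name])

# The comma-separated INI value arrives as a Python list.
assert CONF.glance.valid_interfaces == ['internal']
assert CONF.glance.region_name == 'regionOne'
```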
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.814 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.814 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.814 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.814 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.814 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.814 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.814 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.815 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.815 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.815 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.815 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.815 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.815 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.815 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.816 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.816 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.816 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.816 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.816 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.816 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.817 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.817 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.817 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.817 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.817 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.817 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.817 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.818 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.818 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.818 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.818 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.818 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.818 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.818 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.819 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.819 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.819 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.819 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.819 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.819 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.819 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.820 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.820 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.820 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.820 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.820 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.820 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.820 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.820 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.821 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.821 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.821 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.821 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.821 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.821 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.821 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.822 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.822 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.822 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.822 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.822 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.822 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.822 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.823 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.823 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.823 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.823 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.823 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.823 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.823 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.824 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.824 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.824 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.824 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.824 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.824 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.824 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.824 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.825 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.825 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.825 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
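[Editor's note] Each entry above is oslo.config's log_opt_values dump, printed as <group>.<option> = <value> with secret options masked as ****. In nova.conf the same options live in INI sections named after the group. A minimal sketch reconstructed from the logged barbican values above (illustrative only, not this node's actual file; options logged as None are unset and omitted):

    # Hypothetical nova.conf excerpt matching the logged [barbican] values.
    [barbican]
    barbican_endpoint_type = internal
    number_of_retries = 60
    retry_delay = 1
    verify_ssl = True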
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.825 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.825 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.825 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.825 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.826 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.826 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.826 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.826 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.826 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.826 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.826 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.827 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.827 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.827 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.827 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.827 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.827 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.827 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.828 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.828 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.828 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.828 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.828 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.828 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.828 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.828 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.829 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.829 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.829 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.829 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.829 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.829 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.829 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.830 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.830 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.830 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.830 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.830 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.830 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.830 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.831 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.831 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.831 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.831 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.831 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.831 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.831 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.832 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.832 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.832 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.832 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.832 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.832 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.832 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.833 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.833 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.833 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.833 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.833 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.833 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.833 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.834 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.834 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.834 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.834 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.834 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.834 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.834 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.835 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.835 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.835 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.835 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.835 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.835 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.835 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.836 227722 WARNING oslo_config.cfg [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 09:48:09 compute-1 nova_compute[227718]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 09:48:09 compute-1 nova_compute[227718]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 09:48:09 compute-1 nova_compute[227718]: and ``live_migration_inbound_addr`` respectively.
Nov 25 09:48:09 compute-1 nova_compute[227718]: ).  Its value may be silently ignored in the future.
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.836 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
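[Editor's note] The WARNING above names the fix itself: replace live_migration_uri (logged here as qemu+tls://%s/system) with live_migration_scheme plus live_migration_inbound_addr. A hedged sketch of the equivalent settings; 'tls' preserves the qemu+tls:// transport from the logged URI, while the inbound address is deployment-specific and left as a placeholder rather than invented:

    # Illustrative [libvirt] replacement for the deprecated live_migration_uri.
    [libvirt]
    live_migration_scheme = tls
    live_migration_inbound_addr = <this host's migration-network address>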
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.836 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.836 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.836 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.836 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.837 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.837 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.837 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.837 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.837 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.837 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.837 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.838 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.838 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.838 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.838 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.838 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.839 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.839 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rbd_secret_uuid        = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.839 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.839 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.839 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.839 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.840 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.840 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.840 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.840 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.840 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.840 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.840 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.841 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.841 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.841 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.841 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.841 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.841 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.841 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.842 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.842 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.842 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.842 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.842 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.842 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.842 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.843 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.843 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.843 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.843 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.843 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.843 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.843 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.844 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
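[Editor's note] The libvirt group describes a Ceph-backed compute node: ephemeral disks go to RBD (images_type = rbd) in the vms pool, authenticating as the openstack cephx user via the logged secret UUID. As an illustrative reconstruction of the corresponding nova.conf section, using only values taken from the log above:

    # Hypothetical nova.conf excerpt matching the logged RBD/Ceph settings.
    [libvirt]
    virt_type = kvm
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90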
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.844 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.844 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.844 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.844 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.844 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.844 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.845 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.845 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.845 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.845 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.845 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.845 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.845 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.846 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.846 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.846 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.846 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.846 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.846 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.846 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.847 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.847 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.847 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.847 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.847 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.847 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.847 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.847 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
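[Editor's note] The neutron group shows this compute node proxying instance metadata (service_metadata_proxy = True), signing requests with a shared secret that oslo.config masks as ****. A sketch, with the secret left as a placeholder rather than invented (it must match the value configured on the neutron metadata agent side):

    # Hypothetical nova.conf excerpt from the logged [neutron] values.
    [neutron]
    auth_type = password
    region_name = regionOne
    ovs_bridge = br-int
    service_metadata_proxy = True
    metadata_proxy_shared_secret = <masked in the log>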
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.848 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.848 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.848 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.848 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.848 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.848 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.849 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.849 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.849 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.849 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.849 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.849 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.849 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.849 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.850 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.850 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.850 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.850 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.850 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.850 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.850 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.851 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.851 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.851 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.851 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.851 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.851 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.851 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.851 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.852 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.852 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.852 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.852 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.852 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.852 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.852 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.853 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.853 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.853 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.853 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.853 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.853 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.853 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.854 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.854 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.854 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.854 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.854 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.854 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.854 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.855 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.855 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.855 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.855 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.855 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.855 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.855 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.856 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.856 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.856 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.856 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.856 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.856 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.857 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.857 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.857 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.857 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.857 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.857 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.857 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.857 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.858 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.858 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.858 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.858 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.858 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.858 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.859 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.859 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.859 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.859 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.859 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.859 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.859 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.859 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.860 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.860 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.860 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.860 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.860 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.860 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.861 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.861 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.861 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.861 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.861 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.861 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.861 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.862 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.862 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.862 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.862 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.862 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.862 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.863 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.863 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.863 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.863 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.863 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.863 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.863 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.864 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.864 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.864 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.864 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.864 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.864 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.864 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.865 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.865 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.865 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.865 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.865 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.865 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.865 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.866 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.866 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.866 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.866 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.866 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.866 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.866 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.867 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.867 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.867 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.867 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.867 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.867 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.867 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.868 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.868 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.868 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.868 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.868 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.868 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.868 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.868 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.869 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.869 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.869 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.869 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.869 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.869 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.869 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.870 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.870 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.870 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.870 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.870 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.870 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.870 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.870 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.871 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.871 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.871 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.871 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.871 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.871 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.872 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.872 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.872 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.872 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.872 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.872 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.873 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.873 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.873 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.873 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.873 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.873 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.873 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.873 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.874 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.874 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.874 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.874 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.874 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.874 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.874 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.875 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.875 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.875 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.875 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.875 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.875 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.875 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.876 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.876 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.876 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.876 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.876 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.876 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.876 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.876 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.877 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.877 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.877 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.877 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.877 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.877 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.878 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.878 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.878 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.878 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.878 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.878 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.878 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.879 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.879 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.879 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.879 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.879 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.879 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.879 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.879 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.880 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.880 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.880 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.880 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.880 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.880 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.881 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.881 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.881 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.881 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.881 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.881 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.881 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.882 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.882 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.882 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.882 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.882 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.882 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.882 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.882 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.883 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.883 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.883 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.883 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.883 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.883 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.883 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.884 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.884 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.884 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.884 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.884 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.884 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.884 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.885 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.885 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.885 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.885 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.885 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.885 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.885 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.886 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.886 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.886 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.886 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.886 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.886 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.886 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.886 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.887 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.887 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.887 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.887 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.887 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.887 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.887 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.888 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.888 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.888 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.888 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.888 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.888 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.888 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.889 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.889 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.889 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.889 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.889 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.889 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.889 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.890 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.890 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.890 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.890 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.890 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.890 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.890 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.890 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.891 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.891 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.891 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.891 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.891 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.891 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.891 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.892 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.892 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.892 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.892 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.892 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.892 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.892 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.893 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.893 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.893 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.893 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.893 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.893 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.893 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.894 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.894 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.894 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.894 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.894 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.894 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.894 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.895 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.895 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.895 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.895 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.895 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.895 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.895 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.895 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.896 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.896 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.896 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.896 227722 DEBUG oslo_service.service [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.897 227722 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.904 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.905 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.905 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.905 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 09:48:09 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 09:48:09 compute-1 systemd[1]: Started libvirt QEMU daemon.
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.953 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f28016d03d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.955 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f28016d03d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.956 227722 INFO nova.virt.libvirt.driver [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Connection event '1' reason 'None'
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.962 227722 WARNING nova.virt.libvirt.driver [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Nov 25 09:48:09 compute-1 nova_compute[227718]: 2025-11-25 09:48:09.963 227722 DEBUG nova.virt.libvirt.volume.mount [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 09:48:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:10.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:10 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c005430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:10 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:48:10 compute-1 ceph-mon[79643]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:48:10 compute-1 python3.9[228255]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.651 227722 INFO nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]: 
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <host>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <uuid>3702d874-fa35-45b6-9e4f-9523fd2bec51</uuid>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <arch>x86_64</arch>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <microcode version='167776725'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <signature family='25' model='1' stepping='1'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <maxphysaddr mode='emulate' bits='48'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='x2apic'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='tsc-deadline'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='osxsave'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='hypervisor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='tsc_adjust'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='ospke'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='vaes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='vpclmulqdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='spec-ctrl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='stibp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='arch-capabilities'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='cmp_legacy'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='virt-ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='lbrv'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='tsc-scale'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='vmcb-clean'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='pause-filter'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='pfthreshold'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='vgif'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='rdctl-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='mds-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature name='pschange-mc-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <pages unit='KiB' size='4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <pages unit='KiB' size='2048'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <pages unit='KiB' size='1048576'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <power_management>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <suspend_mem/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </power_management>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <iommu support='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <migration_features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <live/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <uri_transports>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <uri_transport>tcp</uri_transport>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <uri_transport>rdma</uri_transport>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </uri_transports>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </migration_features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <topology>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <cells num='1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <cell id='0'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:           <memory unit='KiB'>7865372</memory>
Nov 25 09:48:10 compute-1 nova_compute[227718]:           <pages unit='KiB' size='4'>1966343</pages>
Nov 25 09:48:10 compute-1 nova_compute[227718]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 09:48:10 compute-1 nova_compute[227718]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 09:48:10 compute-1 nova_compute[227718]:           <distances>
Nov 25 09:48:10 compute-1 nova_compute[227718]:             <sibling id='0' value='10'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:           </distances>
Nov 25 09:48:10 compute-1 nova_compute[227718]:           <cpus num='4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:           </cpus>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         </cell>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </cells>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </topology>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <cache>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </cache>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <secmodel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model>selinux</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <doi>0</doi>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </secmodel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <secmodel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model>dac</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <doi>0</doi>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </secmodel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </host>
Nov 25 09:48:10 compute-1 nova_compute[227718]: 
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <guest>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <os_type>hvm</os_type>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <arch name='i686'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <wordsize>32</wordsize>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <domain type='qemu'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <domain type='kvm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </arch>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <pae/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <nonpae/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <acpi default='on' toggle='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <apic default='on' toggle='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <cpuselection/>
Nov 25 09:48:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <deviceboot/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <disksnapshot default='on' toggle='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <externalSnapshot/>
Nov 25 09:48:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </guest>
Nov 25 09:48:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:10.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:10 compute-1 nova_compute[227718]: 
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <guest>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <os_type>hvm</os_type>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <arch name='x86_64'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <wordsize>64</wordsize>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <domain type='qemu'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <domain type='kvm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </arch>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <acpi default='on' toggle='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <apic default='on' toggle='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <cpuselection/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <deviceboot/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <disksnapshot default='on' toggle='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <externalSnapshot/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </guest>
Nov 25 09:48:10 compute-1 nova_compute[227718]: 
Nov 25 09:48:10 compute-1 nova_compute[227718]: </capabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]: 
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.655 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.671 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 09:48:10 compute-1 nova_compute[227718]: <domainCapabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <domain>kvm</domain>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <arch>i686</arch>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <vcpu max='240'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <iothreads supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <os supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <enum name='firmware'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <loader supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>rom</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pflash</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='readonly'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>yes</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>no</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='secure'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>no</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </loader>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </os>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>on</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>off</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='maximumMigratable'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>on</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>off</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='succor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='custom' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Denverton'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Denverton-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-128'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-256'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-512'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='KnightsMill'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SierraForest'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='athlon'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='athlon-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='core2duo'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='core2duo-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='coreduo'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='coreduo-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='n270'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='n270-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='phenom'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='phenom-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <memoryBacking supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <enum name='sourceType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>file</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>anonymous</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>memfd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </memoryBacking>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <devices>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <disk supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='diskDevice'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>disk</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>cdrom</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>floppy</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>lun</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='bus'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ide</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>fdc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>scsi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>sata</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </disk>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <graphics supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vnc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>egl-headless</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dbus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </graphics>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <video supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='modelType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vga</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>cirrus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>none</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>bochs</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ramfb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </video>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <hostdev supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='mode'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>subsystem</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='startupPolicy'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>default</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>mandatory</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>requisite</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>optional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='subsysType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pci</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>scsi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='capsType'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='pciBackend'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </hostdev>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <rng supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>random</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>egd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>builtin</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </rng>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <filesystem supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='driverType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>path</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>handle</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtiofs</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </filesystem>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <tpm supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tpm-tis</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tpm-crb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>emulator</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>external</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendVersion'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>2.0</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </tpm>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <redirdev supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='bus'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </redirdev>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <channel supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pty</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>unix</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </channel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <crypto supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>qemu</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>builtin</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </crypto>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <interface supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>default</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>passt</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </interface>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <panic supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>isa</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>hyperv</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </panic>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <console supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>null</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pty</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dev</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>file</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pipe</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>stdio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>udp</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tcp</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>unix</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>qemu-vdagent</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dbus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </console>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </devices>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <gic supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <genid supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <backup supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <async-teardown supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <ps2 supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <sev supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <sgx supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <hyperv supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='features'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>relaxed</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vapic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>spinlocks</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vpindex</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>runtime</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>synic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>stimer</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>reset</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vendor_id</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>frequencies</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>reenlightenment</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tlbflush</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ipi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>avic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>emsr_bitmap</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>xmm_input</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <defaults>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </defaults>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </hyperv>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <launchSecurity supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='sectype'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tdx</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </launchSecurity>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </features>
Nov 25 09:48:10 compute-1 nova_compute[227718]: </domainCapabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
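[The record above is the raw XML returned by libvirt's domain-capabilities query, which nova's _get_domain_capabilities helper in nova/virt/libvirt/host.py wraps. A minimal sketch of issuing the same query directly through the libvirt-python binding follows; the connection URI and argument values are illustrative assumptions taken from the <path>, <arch>, <domain>, and machine_type fields visible in this log, not nova's actual code path.]

import libvirt

# Minimal sketch, assuming a local qemu:///system connection.
conn = libvirt.open('qemu:///system')
try:
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary, per the <path> element
        'i686',                   # guest arch, per the log record below
        'q35',                    # machine type alias for pc-q35-rhel9.8.0
        'kvm',                    # virt type, per the <domain> element
        0)                        # flags (currently unused)
    print(xml)
finally:
    conn.close()

[The equivalent one-off check on the host would be: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --machine q35 --virttype kvm]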
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.674 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 09:48:10 compute-1 nova_compute[227718]: <domainCapabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <domain>kvm</domain>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <arch>i686</arch>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <vcpu max='4096'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <iothreads supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <os supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <enum name='firmware'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <loader supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>rom</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pflash</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='readonly'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>yes</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>no</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='secure'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>no</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </loader>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </os>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>on</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>off</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='maximumMigratable'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>on</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>off</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='succor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='custom' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Denverton'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Denverton-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-128'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-256'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-512'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='KnightsMill'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SierraForest'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='athlon'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='athlon-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='core2duo'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='core2duo-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='coreduo'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='coreduo-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='n270'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='n270-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='phenom'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='phenom-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <memoryBacking supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <enum name='sourceType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>file</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>anonymous</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>memfd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </memoryBacking>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <devices>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <disk supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='diskDevice'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>disk</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>cdrom</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>floppy</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>lun</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='bus'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>fdc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>scsi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>sata</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </disk>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <graphics supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vnc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>egl-headless</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dbus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </graphics>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <video supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='modelType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vga</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>cirrus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>none</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>bochs</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ramfb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </video>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <hostdev supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='mode'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>subsystem</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='startupPolicy'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>default</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>mandatory</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>requisite</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>optional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='subsysType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pci</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>scsi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='capsType'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='pciBackend'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </hostdev>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <rng supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>random</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>egd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>builtin</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </rng>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <filesystem supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='driverType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>path</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>handle</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtiofs</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </filesystem>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <tpm supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tpm-tis</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tpm-crb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>emulator</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>external</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendVersion'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>2.0</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </tpm>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <redirdev supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='bus'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </redirdev>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <channel supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pty</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>unix</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </channel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <crypto supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>qemu</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>builtin</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </crypto>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <interface supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>default</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>passt</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </interface>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <panic supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>isa</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>hyperv</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </panic>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <console supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>null</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pty</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dev</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>file</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pipe</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>stdio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>udp</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tcp</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>unix</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>qemu-vdagent</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dbus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </console>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </devices>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <gic supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <genid supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <backup supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <async-teardown supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <ps2 supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <sev supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <sgx supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <hyperv supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='features'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>relaxed</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vapic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>spinlocks</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vpindex</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>runtime</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>synic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>stimer</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>reset</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vendor_id</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>frequencies</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>reenlightenment</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tlbflush</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ipi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>avic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>emsr_bitmap</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>xmm_input</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <defaults>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </defaults>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </hyperv>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <launchSecurity supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='sectype'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tdx</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </launchSecurity>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </features>
Nov 25 09:48:10 compute-1 nova_compute[227718]: </domainCapabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.675 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
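[Editor's note, not part of the captured log: the XML dump that ends above, and the machine_type=pc dump that follows, are the documents returned by libvirt's virConnectGetDomainCapabilities API, which nova's _get_domain_capabilities wrapper (nova/virt/libvirt/host.py, cited in the trailer above) invokes once per machine type named in the preceding DEBUG line. A minimal sketch of the same probe with the libvirt-python bindings is shown below; the connection URI and the printed summary are illustrative assumptions, not nova's actual code. In these dumps, each model reported with usable='no' carries a <blockers> element listing the CPU features the host/QEMU combination cannot provide for that model.

    # Sketch: reproduce the capability probe seen in this log (assumptions noted).
    import libvirt                      # libvirt-python bindings
    import xml.etree.ElementTree as ET

    conn = libvirt.openReadOnly('qemu:///system')   # URI is an assumption
    for machine in ('pc', 'q35'):       # machine types from the DEBUG line above
        caps_xml = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm',    # <path> element in the dumps
            'x86_64', machine, 'kvm')
        root = ET.fromstring(caps_xml)
        # Mirror the <mode name='custom'> section: usable models vs. blocked ones.
        usable = [m.text for m in root.findall(
            ".//mode[@name='custom']/model[@usable='yes']")]
        blocked = root.findall(".//mode[@name='custom']/blockers")
        print(f"{machine}: {len(usable)} usable models, "
              f"{len(blocked)} models with blockers")
    conn.close()

Nova caches one such document per (arch, machine type) pair and uses the usable/blockers data when validating guest CPU model requests. End of note; the captured log resumes below.]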
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.677 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 09:48:10 compute-1 nova_compute[227718]: <domainCapabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <domain>kvm</domain>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <arch>x86_64</arch>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <vcpu max='240'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <iothreads supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <os supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <enum name='firmware'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <loader supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>rom</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pflash</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='readonly'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>yes</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>no</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='secure'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>no</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </loader>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </os>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>on</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>off</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='maximumMigratable'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>on</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>off</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='succor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='custom' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Denverton'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Denverton-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-128'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-256'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-512'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='KnightsMill'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SierraForest'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='athlon'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='athlon-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='core2duo'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='core2duo-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='coreduo'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='coreduo-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='n270'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='n270-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='phenom'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='phenom-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <memoryBacking supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <enum name='sourceType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>file</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>anonymous</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>memfd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </memoryBacking>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <devices>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <disk supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='diskDevice'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>disk</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>cdrom</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>floppy</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>lun</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='bus'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ide</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>fdc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>scsi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>sata</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </disk>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <graphics supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vnc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>egl-headless</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dbus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </graphics>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <video supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='modelType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vga</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>cirrus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>none</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>bochs</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ramfb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </video>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <hostdev supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='mode'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>subsystem</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='startupPolicy'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>default</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>mandatory</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>requisite</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>optional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='subsysType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pci</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>scsi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='capsType'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='pciBackend'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </hostdev>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <rng supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>random</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>egd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>builtin</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </rng>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <filesystem supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='driverType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>path</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>handle</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtiofs</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </filesystem>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <tpm supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tpm-tis</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tpm-crb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>emulator</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>external</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendVersion'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>2.0</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </tpm>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <redirdev supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='bus'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </redirdev>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <channel supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pty</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>unix</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </channel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <crypto supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>qemu</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>builtin</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </crypto>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <interface supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>default</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>passt</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </interface>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <panic supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>isa</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>hyperv</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </panic>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <console supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>null</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pty</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dev</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>file</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pipe</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>stdio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>udp</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tcp</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>unix</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>qemu-vdagent</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dbus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </console>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </devices>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <gic supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <genid supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <backup supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <async-teardown supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <ps2 supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <sev supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <sgx supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <hyperv supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='features'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>relaxed</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vapic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>spinlocks</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vpindex</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>runtime</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>synic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>stimer</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>reset</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vendor_id</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>frequencies</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>reenlightenment</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tlbflush</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ipi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>avic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>emsr_bitmap</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>xmm_input</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <defaults>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </defaults>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </hyperv>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <launchSecurity supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='sectype'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tdx</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </launchSecurity>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </features>
Nov 25 09:48:10 compute-1 nova_compute[227718]: </domainCapabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.720 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 09:48:10 compute-1 nova_compute[227718]: <domainCapabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <domain>kvm</domain>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <arch>x86_64</arch>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <vcpu max='4096'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <iothreads supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <os supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <enum name='firmware'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>efi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <loader supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>rom</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pflash</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='readonly'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>yes</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>no</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='secure'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>yes</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>no</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </loader>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </os>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>on</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>off</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='maximumMigratable'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>on</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>off</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='succor'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <mode name='custom' supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Denverton'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Denverton-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-128'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-256'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx10-512'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Haswell-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='KnightsMill'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xop'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='la57'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SierraForest'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='hle'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='athlon'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='athlon-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='core2duo'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='core2duo-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='coreduo'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='coreduo-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='n270'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='n270-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='ss'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='phenom'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <blockers model='phenom-v1'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </blockers>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </mode>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </cpu>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <memoryBacking supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <enum name='sourceType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>file</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>anonymous</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <value>memfd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </memoryBacking>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <devices>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <disk supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='diskDevice'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>disk</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>cdrom</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>floppy</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>lun</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='bus'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>fdc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>scsi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>sata</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </disk>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <graphics supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vnc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>egl-headless</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dbus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </graphics>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <video supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='modelType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vga</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>cirrus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>none</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>bochs</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ramfb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </video>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <hostdev supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='mode'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>subsystem</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='startupPolicy'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>default</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>mandatory</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>requisite</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>optional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='subsysType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pci</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>scsi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='capsType'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='pciBackend'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </hostdev>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <rng supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>random</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>egd</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>builtin</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </rng>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <filesystem supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='driverType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>path</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>handle</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>virtiofs</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </filesystem>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <tpm supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tpm-tis</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tpm-crb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>emulator</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>external</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendVersion'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>2.0</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </tpm>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <redirdev supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='bus'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>usb</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </redirdev>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <channel supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pty</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>unix</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </channel>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <crypto supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>qemu</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>builtin</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </crypto>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <interface supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='backendType'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>default</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>passt</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </interface>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <panic supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='model'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>isa</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>hyperv</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </panic>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <console supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='type'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>null</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vc</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pty</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dev</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>file</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>pipe</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>stdio</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>udp</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tcp</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>unix</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>qemu-vdagent</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>dbus</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </console>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </devices>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   <features>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <gic supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <genid supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <backup supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <async-teardown supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <ps2 supported='yes'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <sev supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <sgx supported='no'/>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <hyperv supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='features'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>relaxed</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vapic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>spinlocks</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vpindex</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>runtime</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>synic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>stimer</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>reset</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>vendor_id</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>frequencies</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>reenlightenment</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tlbflush</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>ipi</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>avic</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>emsr_bitmap</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>xmm_input</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <defaults>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </defaults>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </hyperv>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     <launchSecurity supported='yes'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       <enum name='sectype'>
Nov 25 09:48:10 compute-1 nova_compute[227718]:         <value>tdx</value>
Nov 25 09:48:10 compute-1 nova_compute[227718]:       </enum>
Nov 25 09:48:10 compute-1 nova_compute[227718]:     </launchSecurity>
Nov 25 09:48:10 compute-1 nova_compute[227718]:   </features>
Nov 25 09:48:10 compute-1 nova_compute[227718]: </domainCapabilities>
Nov 25 09:48:10 compute-1 nova_compute[227718]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.768 227722 DEBUG nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.768 227722 INFO nova.virt.libvirt.host [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Secure Boot support detected
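
supports_secure_boot answers by inspecting the same domainCapabilities document for firmware that can enforce secure boot. A rough approximation of that check (not nova's exact code path in host.py, which also considers firmware feature descriptors):

    import xml.etree.ElementTree as ET

    def has_secure_boot(caps_xml: str) -> bool:
        # <os><loader><enum name='secure'><value>yes</value> in the
        # domainCapabilities XML means a secure-boot capable loader is
        # available for this arch/machine type.
        root = ET.fromstring(caps_xml)
        values = root.findall("./os/loader/enum[@name='secure']/value")
        return any(v.text == 'yes' for v in values)
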
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.769 227722 INFO nova.virt.libvirt.driver [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
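
The INFO line states a precedence rule: when post-copy is both permitted by config and available on the host, nova does not fall back to auto-converge. Stated as a tiny decision function (hypothetical name, for illustration only):

    def migration_speedup(permit_post_copy: bool, post_copy_available: bool,
                          permit_auto_converge: bool) -> str:
        # Post-copy wins when permitted and supported by the hypervisor;
        # auto-converge is only considered otherwise.
        if permit_post_copy and post_copy_available:
            return 'post-copy'
        if permit_auto_converge:
            return 'auto-converge'
        return 'none'
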
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.775 227722 DEBUG nova.virt.libvirt.driver [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.798 227722 INFO nova.virt.node [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Determined node identity 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 from /var/lib/nova/compute_id
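
The node identity is a bare UUID persisted in /var/lib/nova/compute_id. A minimal sketch of reading and validating it (the real logic in nova.virt.node also handles the file being absent on a first boot and generates a new identity):

    import uuid
    from pathlib import Path

    def read_compute_id(path: str = '/var/lib/nova/compute_id') -> uuid.UUID:
        # The file holds a single UUID string; uuid.UUID() raises
        # ValueError if the contents are malformed.
        return uuid.UUID(Path(path).read_text().strip())
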
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.830 227722 WARNING nova.compute.manager [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Compute nodes ['3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.862 227722 INFO nova.compute.manager [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.901 227722 WARNING nova.compute.manager [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.901 227722 DEBUG oslo_concurrency.lockutils [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.902 227722 DEBUG oslo_concurrency.lockutils [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.902 227722 DEBUG oslo_concurrency.lockutils [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.902 227722 DEBUG nova.compute.resource_tracker [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:48:10 compute-1 nova_compute[227718]: 2025-11-25 09:48:10.902 227722 DEBUG oslo_concurrency.processutils [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:10 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5760002b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:11 compute-1 sudo[228437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyeufeswnhsrirgdzgjjoyqhhunvrfru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064090.8153648-4334-241346533126131/AnsiballZ_podman_container.py'
Nov 25 09:48:11 compute-1 sudo[228437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:48:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3877624485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.257 227722 DEBUG oslo_concurrency.processutils [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
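
The resource audit shells out to exactly the command logged above and reads cluster-wide capacity from the JSON it returns. A sketch of the same call and the fields of interest (assuming the client.openstack keyring and /etc/ceph/ceph.conf shown in this log):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # total_bytes / total_avail_bytes feed the hypervisor disk view.
    free_gib = stats['total_avail_bytes'] / 1024 ** 3
    print(f'{free_gib:.2f} GiB available')
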
Nov 25 09:48:11 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 09:48:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:11 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:11 compute-1 systemd[1]: Started libvirt nodedev daemon.
Nov 25 09:48:11 compute-1 python3.9[228439]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 09:48:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3877624485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3343906913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:11 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:48:11 compute-1 sudo[228437]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.611 227722 WARNING nova.virt.libvirt.driver [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.611 227722 DEBUG nova.compute.resource_tracker [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5273MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.612 227722 DEBUG oslo_concurrency.lockutils [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.612 227722 DEBUG oslo_concurrency.lockutils [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.630 227722 WARNING nova.compute.resource_tracker [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] No compute node record for compute-1.ctlplane.example.com:3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 could not be found.
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.644 227722 INFO nova.compute.resource_tracker [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.691 227722 DEBUG nova.compute.resource_tracker [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:48:11 compute-1 nova_compute[227718]: 2025-11-25 09:48:11.691 227722 DEBUG nova.compute.resource_tracker [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
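
The final view's numbers follow directly from the hypervisor view plus reservations: with no instances running, the 512 MB of used_ram reflects nova's reserved_host_memory_mb default of 512. The arithmetic, spelled out as a sketch:

    phys_ram_mb = 7681             # total host RAM reported by libvirt
    reserved_host_memory_mb = 512  # nova's default host reservation
    used_by_instances_mb = 0       # fresh node, no instances yet
    used_ram_mb = reserved_host_memory_mb + used_by_instances_mb  # 512
    schedulable_mb = phys_ram_mb - used_ram_mb                    # 7169
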
Nov 25 09:48:11 compute-1 sudo[228633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxapixcwjnmwzjmdjemlqayjpsbzqmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064091.6710813-4358-246986976578467/AnsiballZ_systemd.py'
Nov 25 09:48:11 compute-1 sudo[228633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:12.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:12 compute-1 python3.9[228635]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:48:12 compute-1 systemd[1]: Stopping nova_compute container...
Nov 25 09:48:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:12 compute-1 nova_compute[227718]: 2025-11-25 09:48:12.188 227722 DEBUG oslo_concurrency.lockutils [None req-68858080-c8f3-46e9-a8e3-0da992fe3cc3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:12 compute-1 nova_compute[227718]: 2025-11-25 09:48:12.188 227722 DEBUG oslo_concurrency.lockutils [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:48:12 compute-1 nova_compute[227718]: 2025-11-25 09:48:12.189 227722 DEBUG oslo_concurrency.lockutils [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:48:12 compute-1 nova_compute[227718]: 2025-11-25 09:48:12.189 227722 DEBUG oslo_concurrency.lockutils [None req-74015238-3bb1-4d26-933f-57ca80dff4c8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:48:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:12 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:12 compute-1 ceph-mon[79643]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:12 compute-1 virtqemud[228099]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 09:48:12 compute-1 virtqemud[228099]: hostname: compute-1
Nov 25 09:48:12 compute-1 virtqemud[228099]: End of file while reading data: Input/output error
Nov 25 09:48:12 compute-1 systemd[1]: libpod-a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6.scope: Deactivated successfully.
Nov 25 09:48:12 compute-1 systemd[1]: libpod-a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6.scope: Consumed 2.915s CPU time.
Nov 25 09:48:12 compute-1 podman[228639]: 2025-11-25 09:48:12.442328143 +0000 UTC m=+0.281785023 container died a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:48:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770-merged.mount: Deactivated successfully.
Nov 25 09:48:12 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6-userdata-shm.mount: Deactivated successfully.
Nov 25 09:48:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:12.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:12 compute-1 podman[228639]: 2025-11-25 09:48:12.887803686 +0000 UTC m=+0.727260566 container cleanup a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Nov 25 09:48:12 compute-1 podman[228639]: nova_compute
Nov 25 09:48:12 compute-1 podman[228662]: nova_compute
Nov 25 09:48:12 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 09:48:12 compute-1 systemd[1]: Stopped nova_compute container.
Nov 25 09:48:12 compute-1 systemd[1]: Starting nova_compute container...
Nov 25 09:48:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:12 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680bfdb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:12 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:48:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5af8bbffcc097cd2c8d3bdfcea1febeec3bf7c49d5c694ce04d171c1a54b770/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-1 podman[228671]: 2025-11-25 09:48:13.007236761 +0000 UTC m=+0.061750267 container init a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:48:13 compute-1 podman[228671]: 2025-11-25 09:48:13.012293461 +0000 UTC m=+0.066806948 container start a4241e07b098f44455c68f62e5484d75c0287d97efb22c1c3f0ee6bbb1a34fa6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 09:48:13 compute-1 podman[228671]: nova_compute
Nov 25 09:48:13 compute-1 nova_compute[228683]: + sudo -E kolla_set_configs
Nov 25 09:48:13 compute-1 systemd[1]: Started nova_compute container.
Nov 25 09:48:13 compute-1 sudo[228633]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Validating config file
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying service configuration files
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /etc/ceph
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Creating directory /etc/ceph
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Writing out command to execute
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:13 compute-1 nova_compute[228683]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 09:48:13 compute-1 nova_compute[228683]: ++ cat /run_command
Nov 25 09:48:13 compute-1 nova_compute[228683]: + CMD=nova-compute
Nov 25 09:48:13 compute-1 nova_compute[228683]: + ARGS=
Nov 25 09:48:13 compute-1 nova_compute[228683]: + sudo kolla_copy_cacerts
Nov 25 09:48:13 compute-1 nova_compute[228683]: + [[ ! -n '' ]]
Nov 25 09:48:13 compute-1 nova_compute[228683]: + . kolla_extend_start
Nov 25 09:48:13 compute-1 nova_compute[228683]: Running command: 'nova-compute'
Nov 25 09:48:13 compute-1 nova_compute[228683]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 09:48:13 compute-1 nova_compute[228683]: + umask 0022
Nov 25 09:48:13 compute-1 nova_compute[228683]: + exec nova-compute
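
The trace above is kolla's container entrypoint: kolla_set_configs copies files into place according to /var/lib/kolla/config_files/config.json, then the shell execs the command stored in /run_command. A simplified sketch of the COPY_ALWAYS strategy (ignoring directory sources, globs, ownership, and permissions, which the real tool also handles):

    import json
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for entry in cfg.get('config_files', []):
        # COPY_ALWAYS: unconditionally replace dest with source on
        # every container start, as the INFO lines above show.
        shutil.copy(entry['source'], entry['dest'])
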
Nov 25 09:48:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:13 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c0057d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:14 compute-1 sudo[228846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miozwvsdiqjjjlhgnfxjsidtmhfjhgsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064093.8600712-4385-122906749546151/AnsiballZ_podman_container.py'
Nov 25 09:48:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:14.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:14 compute-1 sudo[228846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:14 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c0057d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:14 compute-1 python3.9[228848]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 09:48:14 compute-1 systemd[1]: Started libpod-conmon-b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e.scope.
Nov 25 09:48:14 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:48:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128413e58a4dcc68a294ae6583febd09683968f22a05176df3f1277e14a3f198/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128413e58a4dcc68a294ae6583febd09683968f22a05176df3f1277e14a3f198/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128413e58a4dcc68a294ae6583febd09683968f22a05176df3f1277e14a3f198/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:14 compute-1 ceph-mon[79643]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:14 compute-1 podman[228867]: 2025-11-25 09:48:14.437073236 +0000 UTC m=+0.079309221 container init b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:48:14 compute-1 podman[228867]: 2025-11-25 09:48:14.443379963 +0000 UTC m=+0.085615928 container start b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
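[annotation] The config_data label recorded with the init/start events maps fairly directly onto a podman command line. Roughly the following, assembled from the logged fields as a sketch; the actual launch goes through the ansible podman_container module, not a hand-built command:

    import shlex

    IMAGE = ("quay.io/podified-antelope-centos9/openstack-nova-compute"
             "@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076")

    cmd = ["podman", "run", "--name", "nova_compute_init", "--user", "root",
           "--net", "none", "--security-opt", "label=disable",
           "--env", "NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id"]
    for vol in ["/dev/log:/dev/log",
                "/var/lib/nova:/var/lib/nova:shared",
                "/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z",
                "/var/lib/openstack/config/nova/nova_statedir_ownership.py"
                ":/sbin/nova_statedir_ownership.py:z"]:
        cmd += ["--volume", vol]
    cmd += [IMAGE, "bash", "-c",
            "python3 /sbin/nova_statedir_ownership.py"
            " | logger -t nova_compute_init"]
    print(shlex.join(cmd))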
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.444602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094444625, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4654, "num_deletes": 502, "total_data_size": 12816889, "memory_usage": 12976024, "flush_reason": "Manual Compaction"}
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 25 09:48:14 compute-1 python3.9[228848]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094462124, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8307853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13362, "largest_seqno": 18011, "table_properties": {"data_size": 8290133, "index_size": 11974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36516, "raw_average_key_size": 19, "raw_value_size": 8253673, "raw_average_value_size": 4449, "num_data_blocks": 524, "num_entries": 1855, "num_filter_entries": 1855, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063652, "oldest_key_time": 1764063652, "file_creation_time": 1764064094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 17548 microseconds, and 9714 cpu microseconds.
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.462152) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8307853 bytes OK
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.462164) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.462546) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.462556) EVENT_LOG_v1 {"time_micros": 1764064094462553, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.462566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12796385, prev total WAL file size 12796385, number of live WAL files 2.
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.464079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
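[annotation] The compaction bounds in that line are hex-encoded RocksDB keys; decoding them shows the monitor is compacting its trimmed paxos transaction range:

    p0 = bytes.fromhex("7061786F730031303034")   # b'paxos\x001004'
    p1 = bytes.fromhex("7061786F730031323536")   # b'paxos\x001256'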
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8113KB)], [27(11MB)]
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094464096, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 20016962, "oldest_snapshot_seqno": -1}
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 09:48:14 compute-1 nova_compute_init[228886]: INFO:nova_statedir:Nova statedir ownership complete
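[annotation] nova_statedir_ownership.py, as logged here, walks /var/lib/nova, re-owns anything not already owned by the nova uid/gid (42436 on this image), skips paths named in NOVA_STATEDIR_OWNERSHIP_SKIP, and reapplies the container SELinux context. A condensed sketch of that pass; the real script logs each decision as above, and chcon stands in for however it actually sets labels:

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436
    SKIP = set(filter(None, os.environ.get(
        "NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":")))
    CONTEXT = "system_u:object_r:container_file_t:s0"

    def fix_tree(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
            for p in paths:
                if p in SKIP:
                    continue
                st = os.lstat(p)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # e.g. 1000:1000 -> 42436:42436 as in the lines above
                    os.lchown(p, TARGET_UID, TARGET_GID)
                subprocess.run(["chcon", "-h", CONTEXT, p], check=False)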
Nov 25 09:48:14 compute-1 systemd[1]: libpod-b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e.scope: Deactivated successfully.
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5047 keys, 15136523 bytes, temperature: kUnknown
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094497619, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15136523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15098073, "index_size": 24708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 125990, "raw_average_key_size": 24, "raw_value_size": 15001933, "raw_average_value_size": 2972, "num_data_blocks": 1042, "num_entries": 5047, "num_filter_entries": 5047, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.497762) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15136523 bytes
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.498745) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 596.2 rd, 450.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.2 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(4.2) write-amplify(1.8) OK, records in: 6069, records dropped: 1022 output_compression: NoCompression
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.498766) EVENT_LOG_v1 {"time_micros": 1764064094498752, "job": 14, "event": "compaction_finished", "compaction_time_micros": 33574, "compaction_time_cpu_micros": 21014, "output_level": 6, "num_output_files": 1, "total_output_size": 15136523, "num_input_records": 6069, "num_output_records": 5047, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
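[annotation] The amplification figures in the compaction summary are consistent with the logged sizes: 7.9 MB in from L0, 11.2 MB in from L6, 14.4 MB out.

    l0_in, l6_in, out = 7.9, 11.2, 14.4       # MB, from the summary line
    rw_amp = (l0_in + l6_in + out) / l0_in    # ~4.2 ("read-write-amplify")
    w_amp = out / l0_in                       # ~1.8 ("write-amplify")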
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094499666, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094500845, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.464056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.500871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.500874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.500875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.500876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:48:14.500877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-1 podman[228897]: 2025-11-25 09:48:14.528854272 +0000 UTC m=+0.023564885 container died b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:48:14 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e-userdata-shm.mount: Deactivated successfully.
Nov 25 09:48:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-128413e58a4dcc68a294ae6583febd09683968f22a05176df3f1277e14a3f198-merged.mount: Deactivated successfully.
Nov 25 09:48:14 compute-1 podman[228897]: 2025-11-25 09:48:14.54902487 +0000 UTC m=+0.043735463 container cleanup b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:48:14 compute-1 systemd[1]: libpod-conmon-b13a7bf4902af606f64a5d4e9479d0761ab290f6b952ae80955f682f5a44557e.scope: Deactivated successfully.
Nov 25 09:48:14 compute-1 sudo[228846]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:14.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:14 compute-1 nova_compute[228683]: 2025-11-25 09:48:14.769 228687 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:14 compute-1 nova_compute[228683]: 2025-11-25 09:48:14.769 228687 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:14 compute-1 nova_compute[228683]: 2025-11-25 09:48:14.769 228687 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:14 compute-1 nova_compute[228683]: 2025-11-25 09:48:14.770 228687 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
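[annotation] os_vif discovers those three plugins through setuptools entry points. A stevedore-style sketch of the load step, assuming the standard 'os_vif' entry-point namespace (stated as background, not taken from this log):

    from stevedore import extension

    # Enumerate VIF plugins registered under the 'os_vif' namespace;
    # on this image it would list the three plugins logged above.
    mgr = extension.ExtensionManager(namespace="os_vif",
                                     invoke_on_load=False)
    print(sorted(mgr.names()))   # ['linux_bridge', 'noop', 'ovs']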
Nov 25 09:48:14 compute-1 nova_compute[228683]: 2025-11-25 09:48:14.875 228687 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:14 compute-1 nova_compute[228683]: 2025-11-25 09:48:14.885 228687 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:48:14 compute-1 nova_compute[228683]: 2025-11-25 09:48:14.885 228687 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
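[annotation] That failing grep is a capability probe, not an error: the iSCSI connector greps the iscsiadm binary for the node.session.scan option string to decide whether manual scanning is supported. On this host /usr/sbin/iscsiadm was replaced by the run-on-host wrapper earlier in this boot, so the string is absent and grep exits 1 (unsupported). A stand-in for the probe:

    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        # Mirrors the logged check: exit 0 means the option string exists.
        res = subprocess.run(
            ["grep", "-F", "node.session.scan", binary],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return res.returncode == 0   # returncode 1 in this log -> False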
Nov 25 09:48:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:14 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:15 compute-1 sshd-session[199645]: Connection closed by 192.168.122.30 port 37460
Nov 25 09:48:15 compute-1 sshd-session[199642]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:48:15 compute-1 systemd[1]: session-51.scope: Deactivated successfully.
Nov 25 09:48:15 compute-1 systemd[1]: session-51.scope: Consumed 1min 40.264s CPU time.
Nov 25 09:48:15 compute-1 systemd-logind[746]: Session 51 logged out. Waiting for processes to exit.
Nov 25 09:48:15 compute-1 systemd-logind[746]: Removed session 51.
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.266 228687 INFO nova.virt.driver [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 09:48:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:15 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680bfdb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.348 228687 INFO nova.compute.provider_config [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.354 228687 DEBUG oslo_concurrency.lockutils [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.354 228687 DEBUG oslo_concurrency.lockutils [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.354 228687 DEBUG oslo_concurrency.lockutils [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.355 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.355 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.355 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.355 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.355 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.355 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.356 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.356 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.356 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.356 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.356 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.356 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.356 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.357 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.357 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.357 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.357 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.357 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.357 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.357 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.358 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.358 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.358 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.358 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.358 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.358 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.358 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.359 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.359 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.359 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.359 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.359 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.359 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.359 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.360 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.360 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.360 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.360 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.360 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.360 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.360 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.361 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.361 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.361 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.361 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.361 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.361 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.362 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.362 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.362 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.362 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.362 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.362 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.362 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.363 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.363 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.363 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.363 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.363 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.363 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.363 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.363 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.364 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.364 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.364 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.364 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.364 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.364 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.364 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.364 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.365 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.365 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.365 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.365 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.365 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.365 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.365 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.366 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.366 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.366 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.366 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.366 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.366 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.366 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.367 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.367 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.367 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.367 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.367 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.367 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.367 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.367 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.368 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.368 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.368 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.368 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.368 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.368 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.368 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.369 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.369 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.369 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.369 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.369 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.369 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.369 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.369 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.370 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.370 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.370 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.370 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.370 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.370 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.370 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.370 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.371 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.371 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.371 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.371 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.371 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.371 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.371 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.372 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.372 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.372 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.372 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.372 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.372 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.372 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.372 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.373 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.373 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.373 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.373 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.373 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.373 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.373 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.374 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.374 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.374 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.374 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.374 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.374 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.374 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.374 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.375 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.375 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.375 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
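[annotation] The run of bare "name = value" lines above completes the [DEFAULT] section of the dump; each cites log_opt_values at oslo_config/cfg.py:2602. From the next line on, options carry a "group.option" prefix and are reported from cfg.py:2609, the per-group pass of the same routine. A minimal, self-contained sketch of how a service produces this stream using the public oslo.config API (the option registered here is hypothetical, purely for illustration):

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    LOG = logging.getLogger(__name__)

    # Hypothetical option used only for this sketch; nova registers its
    # real options (report_interval, service_down_time, ...) the same way.
    CONF.register_opts([cfg.IntOpt('report_interval', default=10)])

    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # parse an empty argv; defaults apply

    # Walks every registered option, [DEFAULT] first and then each group,
    # and logs each one as "name = value", which is the stream captured
    # in this journal.
    CONF.log_opt_values(LOG, logging.DEBUG)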
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.375 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.375 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
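[annotation] oslo_concurrency.lock_path is where external (file-based) locks land when nova serializes work across processes, and /var/lib/nova/tmp is the packaged default seen here. A minimal sketch of the mechanism with the public oslo.concurrency API; the lock name is hypothetical:

    from oslo_concurrency import lockutils

    # External locks materialize as files under lock_path; two processes
    # entering this function with the same lock name are serialized.
    @lockutils.synchronized('demo-lock', external=True,
                            lock_path='/var/lib/nova/tmp')
    def critical_section():
        pass

    critical_section()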
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.375 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.375 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.376 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.376 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.376 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.376 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.376 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.376 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.376 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.377 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.377 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.377 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.377 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.377 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.377 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.377 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.377 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.378 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.378 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.378 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.378 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.378 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.378 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.378 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.379 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.379 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.379 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.379 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
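[annotation] api.vendordata_providers is ['StaticJSON'] while api.vendordata_jsonfile_path is None, so instances on this host should see an effectively empty vendor_data.json from the metadata API. A deployment that wants to serve static vendordata sets both; a sketch with hypothetical paths and keys:

    import json

    # Hypothetical vendordata payload. The deployment would point
    # vendordata_jsonfile_path at this file in nova.conf:
    #   [api]
    #   vendordata_providers = StaticJSON
    #   vendordata_jsonfile_path = /etc/nova/vendor_data.json
    payload = {'cloud': 'example-cloud', 'support': 'ops@example.com'}
    with open('/tmp/vendor_data.json', 'w') as f:
        json.dump(payload, f)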
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.379 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.379 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.379 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.380 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.380 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.380 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.380 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.380 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.380 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.380 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.381 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.381 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.381 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.381 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.381 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.381 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.381 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.381 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.382 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.382 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.382 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.382 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.382 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.382 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.382 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.383 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.383 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.383 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.383 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.383 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.383 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.383 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
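[annotation] The cache.* entries above map to the [cache] section of nova.conf. Note that cache.enabled is True but the backend is oslo_cache.dict, an in-process dictionary, so the memcache_servers default of ['localhost:11211'] is carried along without being used. In code, each dumped "group.option" is reached as an attribute of the group; a sketch with simplified stand-ins for the real oslo.cache option definitions:

    from oslo_config import cfg

    CONF = cfg.CONF
    cache_group = cfg.OptGroup('cache')
    CONF.register_group(cache_group)
    # Simplified stand-ins for the real oslo.cache option definitions.
    CONF.register_opts([
        cfg.BoolOpt('enabled', default=False),
        cfg.ListOpt('memcache_servers', default=['localhost:11211']),
    ], group=cache_group)

    CONF([])
    # The dump line "cache.memcache_servers = ['localhost:11211']" is this
    # value, accessed as <group>.<option>:
    print(CONF.cache.enabled, CONF.cache.memcache_servers)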
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.383 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.384 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.384 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.384 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.384 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.384 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.384 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.384 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.385 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.385 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.385 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.385 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.385 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.385 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.385 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
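[annotation] cinder.catalog_info packs three fields into one colon-separated value: the service type, the service name, and the endpoint interface nova uses when it looks Cinder up in the Keystone catalog. A trivial sketch of the documented <service_type>:<service_name>:<interface> layout:

    # Format of nova's [cinder] catalog_info option:
    # <service_type>:<service_name>:<interface>
    catalog_info = 'volumev3:cinderv3:internalURL'
    service_type, service_name, interface = catalog_info.split(':')
    assert (service_type, service_name, interface) == (
        'volumev3', 'cinderv3', 'internalURL')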
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.386 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.386 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.386 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.386 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.386 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.386 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.386 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.387 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.387 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.387 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.387 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.387 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
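[annotation] compute.cpu_dedicated_set and compute.cpu_shared_set are both None, and the deprecated [DEFAULT] vcpu_pin_set earlier in the dump (which these two options supersede) is None as well, so this host does no CPU pinning. When set, the value is a CPU-set string in nova's range syntax, e.g. '4-12,^8,15' for CPUs 4 through 12 except 8, plus 15. A small parser written as an illustration of that format, not nova's own implementation:

    def parse_cpu_set(spec: str) -> set[int]:
        """Parse nova's CPU-set syntax: comma-separated entries, where an
        entry is a single CPU, an inclusive range 'a-b', or an exclusion
        prefixed with '^'. Illustration only, not nova's parser."""
        include: set = set()
        exclude: set = set()
        for entry in filter(None, (e.strip() for e in spec.split(','))):
            target = exclude if entry.startswith('^') else include
            entry = entry.lstrip('^')
            if '-' in entry:
                lo, hi = map(int, entry.split('-', 1))
                target.update(range(lo, hi + 1))
            else:
                target.add(int(entry))
        return include - exclude

    assert parse_cpu_set('4-12,^8,15') == {4, 5, 6, 7, 9, 10, 11, 12, 15}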
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.387 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.387 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.388 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.388 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.388 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.388 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.388 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.388 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.388 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.389 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.389 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.389 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.389 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.389 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.389 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.389 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.389 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.390 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.390 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.390 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.390 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.390 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.390 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.390 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
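[annotation] The cyborg.* block is the standard keystoneauth session/adapter option set (cafile, certfile, endpoint_override, valid_interfaces, timeout, ...) that nova registers once per remote service it talks to, which is why the same names recur under cinder above and glance below. A sketch of how such a group is populated, assuming the keystoneauth1 loading helpers; the group name is hypothetical:

    from keystoneauth1 import loading
    from oslo_config import cfg

    CONF = cfg.CONF
    # Registers cafile/certfile/keyfile/insecure/timeout and the adapter
    # options (service_type, valid_interfaces, endpoint_override, ...)
    # under the given group, mirroring how [cyborg] and friends are built.
    loading.register_session_conf_options(CONF, 'myservice')
    loading.register_adapter_conf_options(CONF, 'myservice')
    CONF([])
    print(CONF.myservice.valid_interfaces)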
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.391 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.391 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.391 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.391 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.391 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.391 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.391 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.392 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.392 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.392 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.392 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.392 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.392 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.392 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.392 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.393 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.393 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.393 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.393 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.393 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
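[annotation] database.connection and database.slave_connection print as **** rather than their real DSNs: oslo.config masks any option registered with secret=True when log_opt_values() runs, which is why credential-bearing values (transport_url and cache.backend_argument earlier, the api_database connections below) never leak into this journal. A compact sketch of that behavior with a hypothetical option and DSN:

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    LOG = logging.getLogger(__name__)
    logging.basicConfig(level=logging.DEBUG)

    # Hypothetical secret option; nova's database.connection is declared
    # with secret=True in the same way. The DSN below is made up.
    CONF.register_opts([cfg.StrOpt('dsn', secret=True,
                                   default='mysql://nova:hunter2@db/nova')])
    CONF([])
    CONF.log_opt_values(LOG, logging.DEBUG)  # logs: dsn = ****
    print(CONF.dsn)  # code still sees the real value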
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.393 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.393 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.394 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.394 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.394 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.394 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.394 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.394 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.394 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.395 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.395 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.395 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.395 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.395 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.395 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.395 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.395 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.396 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.396 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.396 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.396 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.396 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.396 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.396 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.397 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.397 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.397 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.397 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.397 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.397 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.397 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.398 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.398 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.398 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.398 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.398 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.398 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.398 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.398 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.399 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.399 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.399 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.399 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.399 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.399 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.399 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.400 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.400 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.400 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.400 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.400 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.400 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.400 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.400 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.401 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.401 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.401 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.401 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.401 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.401 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.401 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.402 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.402 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.402 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.402 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.402 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.402 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.402 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.403 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.403 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.403 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.403 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.403 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.403 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.404 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.404 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.404 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.404 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.404 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.404 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.404 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.405 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.405 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.405 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.405 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.405 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.405 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.405 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.406 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.406 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.406 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.406 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.406 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.406 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.406 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.406 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.407 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.407 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.407 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.407 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.407 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.407 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.407 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.407 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.408 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.408 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.408 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.408 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.408 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.408 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.408 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.409 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.409 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.409 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.409 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.409 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.409 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.409 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.410 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.410 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.410 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.410 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.410 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.410 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.410 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.411 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.411 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.411 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.411 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.411 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.411 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.411 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.412 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.412 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.412 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.412 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.412 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.412 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.412 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.412 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.413 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.413 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.413 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.413 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.413 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.413 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.413 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.414 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.414 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.414 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.414 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.414 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.414 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.415 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.415 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.415 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.415 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.415 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.415 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.416 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.416 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.416 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.416 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.416 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.416 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.416 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.417 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.417 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.417 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.417 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.417 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.417 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.417 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.418 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.418 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.418 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.418 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.418 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.418 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.418 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.419 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.419 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.419 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.419 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.419 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.419 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.419 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.420 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.420 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.420 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.420 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.420 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.420 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.420 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.421 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.421 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.421 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.421 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.421 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.421 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.421 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.422 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.422 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.422 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.422 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.422 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.422 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.422 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.423 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.423 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.423 228687 WARNING oslo_config.cfg [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 09:48:15 compute-1 nova_compute[228683]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 09:48:15 compute-1 nova_compute[228683]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 09:48:15 compute-1 nova_compute[228683]: and ``live_migration_inbound_addr`` respectively.
Nov 25 09:48:15 compute-1 nova_compute[228683]: ).  Its value may be silently ignored in the future.
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.423 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.423 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.423 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.424 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.424 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.424 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.424 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.424 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.424 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.425 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.425 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.425 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.425 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.425 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.426 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.426 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.426 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.426 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.426 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rbd_secret_uuid        = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.426 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.426 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.427 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.427 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.427 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.427 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.427 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.427 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.427 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.428 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.428 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.428 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.428 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.428 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.428 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.428 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.429 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.429 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.429 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.429 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.429 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.429 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.429 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.430 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.430 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.430 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.430 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.430 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.430 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.430 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.431 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.431 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.431 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.431 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.431 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.431 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.431 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.432 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.432 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.432 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.432 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.432 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.432 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.432 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.433 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.433 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.433 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.433 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.433 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.433 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.433 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.433 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.434 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.434 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.434 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.434 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.434 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.434 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.434 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.435 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.435 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.435 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.435 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.435 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.435 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.435 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.436 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.436 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.436 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.436 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.436 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.436 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.436 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.437 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.437 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.437 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.437 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.437 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.437 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.437 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.437 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.438 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.438 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.438 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.438 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.438 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.438 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.438 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.439 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.439 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.439 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.439 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.439 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.439 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.439 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.440 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.440 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.440 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.440 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.440 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.440 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.440 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.440 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.441 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.441 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.441 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.441 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.441 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.441 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.441 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.442 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.442 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.442 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.442 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.442 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.442 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.442 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.443 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.443 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.443 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.443 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.443 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.443 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.443 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.444 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.444 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.444 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.444 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.444 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.444 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.445 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.445 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.445 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.445 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.445 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.445 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.445 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.446 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.446 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.446 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.446 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.446 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.446 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.446 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.447 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.447 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.447 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.447 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.447 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.447 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.447 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.448 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.448 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.448 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.448 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.448 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.448 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.448 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.449 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.449 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.449 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.449 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.449 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.449 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.449 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.450 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.450 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.450 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.450 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.450 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.450 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.451 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.451 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.451 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.451 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.451 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.451 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.451 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.452 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.452 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.452 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.452 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.452 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.452 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.452 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.453 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.453 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.453 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.453 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.453 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.453 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.453 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.454 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.454 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.454 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.454 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.454 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.454 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.454 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.454 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.455 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.455 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.455 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.455 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.455 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.455 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.455 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.456 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.456 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.456 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.456 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.456 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.456 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.456 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.457 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.457 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.457 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.457 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.457 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.457 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.457 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.457 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.458 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.458 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.458 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.458 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.458 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.458 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.458 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.459 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.459 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.459 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.459 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.459 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.459 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.460 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.460 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
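The vnc.* values above map one-to-one onto a [vnc] section in nova.conf: the console is enabled, the hypervisor's VNC server listens on every address (::0), the proxy dials back to this host at 192.168.122.101, and clients are handed the noVNC base URL. A minimal standard-library sketch of the stanza these values imply (options logged as None are simply omitted and fall back to their oslo.config defaults):

    # Sketch: the [vnc] stanza implied by the logged values, round-tripped
    # through configparser to print the same "vnc.option = value" pairs.
    import configparser

    NOVA_CONF_VNC = """
    [vnc]
    enabled = True
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.101
    """

    parser = configparser.ConfigParser()
    parser.read_string(NOVA_CONF_VNC)
    for option, value in parser.items('vnc'):
        print(f'vnc.{option} = {value}')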
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.460 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.460 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.460 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.460 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.460 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.461 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.461 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.461 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.461 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.461 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.461 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.461 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.462 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.462 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.462 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.462 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.462 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.462 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.462 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.462 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.463 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.463 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.463 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.463 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.463 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.463 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.463 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.464 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.464 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.464 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.464 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.464 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.464 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.464 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.465 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.465 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.465 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.465 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.465 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.465 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.466 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.466 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.466 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.466 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.466 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.466 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.466 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.467 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.467 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.467 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.467 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.467 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.467 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.467 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.468 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.468 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.468 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.468 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.468 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.468 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.468 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.468 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.469 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.469 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.469 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.469 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.469 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.469 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.469 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.470 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.470 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.470 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.470 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.470 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.470 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.470 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.471 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.471 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.471 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.471 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.471 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.471 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.471 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.472 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.472 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.472 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.472 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.472 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.472 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.472 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.473 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.473 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.473 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.473 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.473 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.473 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.473 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.473 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.474 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.474 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.474 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.474 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.474 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.474 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.474 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.475 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.475 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.475 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.475 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.475 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.475 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.475 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.476 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.476 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.476 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.476 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.476 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.476 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.476 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.476 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.477 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.477 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.477 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
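The oslo_limit.* block spells out how nova-compute authenticates to Keystone for unified limits: password auth against https://keystone-internal.openstack.svc:5000 as the nova user in the Default domain, with a system-wide scope (oslo_limit.system_scope = all). A minimal keystoneauth1 sketch of the equivalent session; the real password is masked (****) in the dump, so a placeholder stands in:

    # Sketch: the Keystone session described by the oslo_limit.* values.
    # SERVICE_PASSWORD is a placeholder; the logged value is masked.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        password='SERVICE_PASSWORD',
        user_domain_name='Default',   # oslo_limit.user_domain_name
        system_scope='all',           # oslo_limit.system_scope
    )
    sess = session.Session(auth=auth)
    print(sess.get_token())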
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.477 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.477 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.477 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.478 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.478 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.478 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.478 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.478 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.478 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.478 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.479 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.479 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.479 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.479 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.479 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.479 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.479 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.480 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.480 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.480 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.480 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.480 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.480 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.480 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.480 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.481 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.481 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.481 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.481 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
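The os_vif_ovs.* group shows os-vif using the native OVSDB protocol against the local switch at tcp:127.0.0.1:6640, with a 120-second vsctl timeout and per-port bridges disabled. A quick standard-library reachability check for that endpoint (assumes ovsdb-server is listening locally, as it is on this node):

    # Sketch: probing the OVSDB manager endpoint os-vif is configured to use.
    import socket

    with socket.create_connection(('127.0.0.1', 6640), timeout=5) as sock:
        print('OVSDB endpoint reachable:', sock.getpeername())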
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.481 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.481 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.481 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.482 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.482 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.482 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.482 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.482 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.482 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.482 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.483 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.483 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.483 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.483 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.483 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.483 228687 DEBUG oslo_service.service [None req-646178c5-3a26-4773-9441-c86488610487 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.484 228687 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.494 228687 INFO nova.virt.node [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Determined node identity 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 from /var/lib/nova/compute_id
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.495 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.495 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.495 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.495 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.504 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa3f1bc1430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.506 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa3f1bc1430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.506 228687 INFO nova.virt.libvirt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Connection event '1' reason 'None'
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.510 228687 INFO nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]: 
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <host>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <uuid>3702d874-fa35-45b6-9e4f-9523fd2bec51</uuid>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <arch>x86_64</arch>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model>EPYC-Milan-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <vendor>AMD</vendor>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <microcode version='167776725'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <signature family='25' model='1' stepping='1'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <maxphysaddr mode='emulate' bits='48'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='x2apic'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='tsc-deadline'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='osxsave'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='hypervisor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='tsc_adjust'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='ospke'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='vaes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='vpclmulqdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='spec-ctrl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='stibp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='arch-capabilities'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='cmp_legacy'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='virt-ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='lbrv'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='tsc-scale'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='vmcb-clean'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='pause-filter'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='pfthreshold'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='v-vmsave-vmload'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='vgif'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='rdctl-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='mds-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature name='pschange-mc-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <pages unit='KiB' size='4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <pages unit='KiB' size='2048'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <pages unit='KiB' size='1048576'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <power_management>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <suspend_mem/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </power_management>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <iommu support='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <migration_features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <live/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <uri_transports>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <uri_transport>tcp</uri_transport>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <uri_transport>rdma</uri_transport>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </uri_transports>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </migration_features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <topology>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <cells num='1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <cell id='0'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:           <memory unit='KiB'>7865372</memory>
Nov 25 09:48:15 compute-1 nova_compute[228683]:           <pages unit='KiB' size='4'>1966343</pages>
Nov 25 09:48:15 compute-1 nova_compute[228683]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 09:48:15 compute-1 nova_compute[228683]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 09:48:15 compute-1 nova_compute[228683]:           <distances>
Nov 25 09:48:15 compute-1 nova_compute[228683]:             <sibling id='0' value='10'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:           </distances>
Nov 25 09:48:15 compute-1 nova_compute[228683]:           <cpus num='4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:           </cpus>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         </cell>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </cells>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </topology>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <cache>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </cache>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <secmodel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model>selinux</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <doi>0</doi>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </secmodel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <secmodel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model>dac</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <doi>0</doi>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </secmodel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </host>
Nov 25 09:48:15 compute-1 nova_compute[228683]: 
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <guest>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <os_type>hvm</os_type>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <arch name='i686'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <wordsize>32</wordsize>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <domain type='qemu'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <domain type='kvm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </arch>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <pae/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <nonpae/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <acpi default='on' toggle='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <apic default='on' toggle='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <cpuselection/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <deviceboot/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <disksnapshot default='on' toggle='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <externalSnapshot/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </guest>
Nov 25 09:48:15 compute-1 nova_compute[228683]: 
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <guest>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <os_type>hvm</os_type>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <arch name='x86_64'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <wordsize>64</wordsize>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <domain type='qemu'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <domain type='kvm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </arch>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <acpi default='on' toggle='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <apic default='on' toggle='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <cpuselection/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <deviceboot/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <disksnapshot default='on' toggle='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <externalSnapshot/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </guest>
Nov 25 09:48:15 compute-1 nova_compute[228683]: 
Nov 25 09:48:15 compute-1 nova_compute[228683]: </capabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]: 
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.517 228687 DEBUG nova.virt.libvirt.volume.mount [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.520 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.522 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 09:48:15 compute-1 nova_compute[228683]: <domainCapabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <domain>kvm</domain>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <arch>i686</arch>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <vcpu max='4096'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <iothreads supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <os supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <enum name='firmware'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <loader supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>rom</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pflash</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='readonly'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>yes</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>no</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='secure'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>no</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </loader>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </os>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>on</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>off</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='maximumMigratable'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>on</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>off</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <vendor>AMD</vendor>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='succor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='custom' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Denverton'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Denverton-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='auto-ibrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='auto-ibrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-128'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-256'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-512'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='KnightsMill'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512er'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512pf'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512er'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512pf'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tbm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tbm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SierraForest'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cmpccxadd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cmpccxadd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='athlon'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='athlon-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='core2duo'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='core2duo-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='coreduo'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='coreduo-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='n270'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='n270-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='phenom'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='phenom-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <memoryBacking supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <enum name='sourceType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>file</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>anonymous</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>memfd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </memoryBacking>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <devices>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <disk supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='diskDevice'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>disk</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>cdrom</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>floppy</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>lun</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='bus'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>fdc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>scsi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>sata</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-non-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <graphics supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vnc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>egl-headless</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dbus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </graphics>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <video supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='modelType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vga</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>cirrus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>none</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>bochs</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ramfb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </video>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <hostdev supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='mode'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>subsystem</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='startupPolicy'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>default</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>mandatory</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>requisite</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>optional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='subsysType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pci</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>scsi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='capsType'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='pciBackend'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </hostdev>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <rng supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-non-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>random</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>egd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>builtin</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </rng>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <filesystem supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='driverType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>path</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>handle</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtiofs</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </filesystem>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <tpm supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tpm-tis</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tpm-crb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>emulator</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>external</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendVersion'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>2.0</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </tpm>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <redirdev supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='bus'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </redirdev>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <channel supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pty</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>unix</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </channel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <crypto supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>qemu</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>builtin</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </crypto>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <interface supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>default</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>passt</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </interface>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <panic supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>isa</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>hyperv</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </panic>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <console supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>null</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pty</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dev</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>file</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pipe</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>stdio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>udp</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tcp</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>unix</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>qemu-vdagent</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dbus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </console>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </devices>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <gic supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <genid supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <backup supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <async-teardown supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <ps2 supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <sev supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <sgx supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <hyperv supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='features'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>relaxed</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vapic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>spinlocks</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vpindex</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>runtime</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>synic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>stimer</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>reset</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vendor_id</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>frequencies</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>reenlightenment</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tlbflush</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ipi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>avic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>emsr_bitmap</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>xmm_input</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <defaults>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </defaults>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </hyperv>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <launchSecurity supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='sectype'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tdx</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </launchSecurity>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </features>
Nov 25 09:48:15 compute-1 nova_compute[228683]: </domainCapabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.527 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 09:48:15 compute-1 nova_compute[228683]: <domainCapabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <domain>kvm</domain>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <arch>i686</arch>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <vcpu max='240'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <iothreads supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <os supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <enum name='firmware'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <loader supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>rom</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pflash</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='readonly'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>yes</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>no</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='secure'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>no</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </loader>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </os>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>on</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>off</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='maximumMigratable'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>on</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>off</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <vendor>AMD</vendor>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='succor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='custom' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Denverton'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Denverton-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='auto-ibrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='auto-ibrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-128'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-256'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-512'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='KnightsMill'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512er'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512pf'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512er'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512pf'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tbm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tbm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SierraForest'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cmpccxadd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cmpccxadd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='athlon'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='athlon-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='core2duo'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='core2duo-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='coreduo'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='coreduo-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='n270'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='n270-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='phenom'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='phenom-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <memoryBacking supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <enum name='sourceType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>file</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>anonymous</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>memfd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </memoryBacking>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <devices>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <disk supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='diskDevice'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>disk</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>cdrom</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>floppy</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>lun</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='bus'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ide</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>fdc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>scsi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>sata</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-non-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <graphics supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vnc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>egl-headless</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dbus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </graphics>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <video supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='modelType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vga</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>cirrus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>none</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>bochs</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ramfb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </video>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <hostdev supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='mode'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>subsystem</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='startupPolicy'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>default</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>mandatory</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>requisite</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>optional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='subsysType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pci</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>scsi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='capsType'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='pciBackend'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </hostdev>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <rng supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-non-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>random</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>egd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>builtin</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </rng>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <filesystem supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='driverType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>path</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>handle</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtiofs</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </filesystem>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <tpm supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tpm-tis</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tpm-crb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>emulator</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>external</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendVersion'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>2.0</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </tpm>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <redirdev supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='bus'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </redirdev>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <channel supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pty</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>unix</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </channel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <crypto supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>qemu</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>builtin</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </crypto>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <interface supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>default</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>passt</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </interface>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <panic supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>isa</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>hyperv</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </panic>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <console supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>null</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pty</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dev</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>file</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pipe</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>stdio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>udp</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tcp</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>unix</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>qemu-vdagent</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dbus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </console>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </devices>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <gic supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <genid supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <backup supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <async-teardown supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <ps2 supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <sev supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <sgx supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <hyperv supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='features'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>relaxed</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vapic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>spinlocks</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vpindex</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>runtime</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>synic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>stimer</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>reset</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vendor_id</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>frequencies</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>reenlightenment</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tlbflush</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ipi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>avic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>emsr_bitmap</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>xmm_input</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <defaults>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </defaults>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </hyperv>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <launchSecurity supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='sectype'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tdx</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </launchSecurity>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </features>
Nov 25 09:48:15 compute-1 nova_compute[228683]: </domainCapabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
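The <domainCapabilities> document ending above is libvirt's reply to nova-compute's capability probe for this emulator. A minimal sketch of the same query through the libvirt-python binding, assuming a local qemu:///system connection and the /usr/libexec/qemu-kvm emulator path reported in the dump; this is an illustrative standalone script, not nova's own code path:

    import libvirt

    # Connect to the local system hypervisor, as nova-compute does.
    conn = libvirt.open('qemu:///system')

    # Request the same domainCapabilities XML that is dumped in the log above.
    # Arguments: emulator binary, arch, machine type, virt type (all optional).
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # <path> from the dump
        'x86_64',                 # <arch>
        'q35',                    # machine type
        'kvm',                    # <domain>
    )
    print(caps_xml)               # same XML shape as the log dump
    conn.close()

The equivalent one-off check from a shell would be "virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm".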
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.531 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
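The q35 dump that follows enumerates, under <mode name='custom'>, every named CPU model with a usable attribute and, for each unusable model, a <blockers> list of the features the host CPU lacks. A small illustrative parser for that shape, assuming the XML string is held in caps_xml; usable_models is a made-up helper name, not part of nova, and the element/attribute names follow libvirt's domainCapabilities schema as shown in the dump:

    import xml.etree.ElementTree as ET

    def usable_models(caps_xml):
        """Map each custom CPU model to (usable, [blocking features])."""
        root = ET.fromstring(caps_xml)
        custom = root.find("./cpu/mode[@name='custom']")
        summary = {}
        for model in custom.findall('model'):
            name = model.text
            # <blockers model='X'> is a sibling listing host-missing features.
            blockers = custom.find("blockers[@model='%s']" % name)
            missing = ([f.get('name') for f in blockers.findall('feature')]
                       if blockers is not None else [])
            summary[name] = (model.get('usable'), missing)
        return summary

    # From the dump below, e.g.:
    #   usable_models(caps_xml)['Broadwell'] -> ('no', ['hle', 'rtm'])
    #   usable_models(caps_xml)['EPYC-Milan'] -> ('yes', [])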
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.534 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 09:48:15 compute-1 nova_compute[228683]: <domainCapabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <domain>kvm</domain>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <arch>x86_64</arch>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <vcpu max='4096'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <iothreads supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <os supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <enum name='firmware'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>efi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <loader supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>rom</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pflash</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='readonly'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>yes</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>no</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='secure'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>yes</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>no</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </loader>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </os>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>on</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>off</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='maximumMigratable'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>on</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>off</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <vendor>AMD</vendor>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='succor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='custom' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Denverton'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Denverton-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='auto-ibrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='auto-ibrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-128'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-256'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-512'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='KnightsMill'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512er'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512pf'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512er'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512pf'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tbm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tbm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SierraForest'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cmpccxadd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cmpccxadd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='athlon'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='athlon-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='core2duo'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='core2duo-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='coreduo'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='coreduo-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='n270'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='n270-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='phenom'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='phenom-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <memoryBacking supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <enum name='sourceType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>file</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>anonymous</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>memfd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </memoryBacking>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <devices>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <disk supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='diskDevice'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>disk</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>cdrom</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>floppy</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>lun</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='bus'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>fdc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>scsi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>sata</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-non-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <graphics supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vnc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>egl-headless</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dbus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </graphics>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <video supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='modelType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vga</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>cirrus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>none</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>bochs</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ramfb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </video>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <hostdev supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='mode'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>subsystem</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='startupPolicy'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>default</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>mandatory</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>requisite</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>optional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='subsysType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pci</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>scsi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='capsType'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='pciBackend'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </hostdev>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <rng supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-non-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>random</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>egd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>builtin</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </rng>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <filesystem supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='driverType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>path</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>handle</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtiofs</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </filesystem>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <tpm supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tpm-tis</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tpm-crb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>emulator</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>external</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendVersion'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>2.0</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </tpm>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <redirdev supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='bus'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </redirdev>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <channel supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pty</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>unix</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </channel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <crypto supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>qemu</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>builtin</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </crypto>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <interface supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>default</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>passt</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </interface>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <panic supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>isa</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>hyperv</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </panic>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <console supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>null</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pty</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dev</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>file</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pipe</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>stdio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>udp</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tcp</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>unix</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>qemu-vdagent</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dbus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </console>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </devices>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <gic supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <genid supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <backup supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <async-teardown supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <ps2 supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <sev supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <sgx supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <hyperv supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='features'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>relaxed</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vapic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>spinlocks</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vpindex</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>runtime</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>synic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>stimer</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>reset</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vendor_id</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>frequencies</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>reenlightenment</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tlbflush</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ipi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>avic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>emsr_bitmap</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>xmm_input</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <defaults>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </defaults>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </hyperv>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <launchSecurity supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='sectype'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tdx</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </launchSecurity>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </features>
Nov 25 09:48:15 compute-1 nova_compute[228683]: </domainCapabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.579 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 09:48:15 compute-1 nova_compute[228683]: <domainCapabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <domain>kvm</domain>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <arch>x86_64</arch>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <vcpu max='240'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <iothreads supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <os supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <enum name='firmware'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <loader supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>rom</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pflash</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='readonly'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>yes</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>no</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='secure'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>no</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </loader>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </os>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>on</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>off</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='maximumMigratable'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>on</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>off</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <vendor>AMD</vendor>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='succor'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <mode name='custom' supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Denverton'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Denverton-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='auto-ibrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='auto-ibrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amd-psfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='stibp-always-on'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-128'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-256'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx10-512'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='prefetchiti'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Haswell-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='KnightsMill'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512er'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512pf'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512er'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512pf'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tbm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fma4'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tbm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xop'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='amx-tile'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-bf16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-fp16'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bitalg'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrc'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fzrm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='la57'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='taa-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='xfd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SierraForest'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cmpccxadd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ifma'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cmpccxadd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fbsdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='fsrs'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ibrs-all'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mcdt-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='pbrsb-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='psdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='serialize'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='hle'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='rtm'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512bw'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512cd'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512dq'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512f'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='avx512vl'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='mpx'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='core-capability'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='split-lock-detect'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='cldemote'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='gfni'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdir64b'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='movdiri'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='athlon'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='athlon-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='core2duo'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='core2duo-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='coreduo'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='coreduo-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='n270'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='n270-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='ss'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='phenom'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <blockers model='phenom-v1'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnow'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <feature name='3dnowext'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </blockers>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </mode>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </cpu>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <memoryBacking supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <enum name='sourceType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>file</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>anonymous</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <value>memfd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </memoryBacking>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <devices>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <disk supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='diskDevice'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>disk</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>cdrom</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>floppy</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>lun</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='bus'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ide</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>fdc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>scsi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>sata</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-non-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <graphics supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vnc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>egl-headless</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dbus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </graphics>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <video supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='modelType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vga</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>cirrus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>none</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>bochs</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ramfb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </video>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <hostdev supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='mode'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>subsystem</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='startupPolicy'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>default</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>mandatory</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>requisite</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>optional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='subsysType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pci</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>scsi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='capsType'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='pciBackend'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </hostdev>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <rng supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtio-non-transitional</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>random</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>egd</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>builtin</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </rng>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <filesystem supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='driverType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>path</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>handle</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>virtiofs</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </filesystem>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <tpm supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tpm-tis</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tpm-crb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>emulator</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>external</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendVersion'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>2.0</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </tpm>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <redirdev supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='bus'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>usb</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </redirdev>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <channel supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pty</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>unix</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </channel>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <crypto supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>qemu</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendModel'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>builtin</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </crypto>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <interface supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='backendType'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>default</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>passt</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </interface>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <panic supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='model'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>isa</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>hyperv</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </panic>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <console supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='type'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>null</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vc</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pty</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dev</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>file</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>pipe</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>stdio</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>udp</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tcp</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>unix</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>qemu-vdagent</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>dbus</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </console>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </devices>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   <features>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <gic supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <genid supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <backup supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <async-teardown supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <ps2 supported='yes'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <sev supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <sgx supported='no'/>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <hyperv supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='features'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>relaxed</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vapic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>spinlocks</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vpindex</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>runtime</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>synic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>stimer</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>reset</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>vendor_id</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>frequencies</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>reenlightenment</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tlbflush</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>ipi</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>avic</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>emsr_bitmap</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>xmm_input</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <defaults>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </defaults>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </hyperv>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     <launchSecurity supported='yes'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       <enum name='sectype'>
Nov 25 09:48:15 compute-1 nova_compute[228683]:         <value>tdx</value>
Nov 25 09:48:15 compute-1 nova_compute[228683]:       </enum>
Nov 25 09:48:15 compute-1 nova_compute[228683]:     </launchSecurity>
Nov 25 09:48:15 compute-1 nova_compute[228683]:   </features>
Nov 25 09:48:15 compute-1 nova_compute[228683]: </domainCapabilities>
Nov 25 09:48:15 compute-1 nova_compute[228683]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.623 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.623 228687 INFO nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Secure Boot support detected
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.624 228687 INFO nova.virt.libvirt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.630 228687 DEBUG nova.virt.libvirt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.663 228687 INFO nova.virt.node [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Determined node identity 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 from /var/lib/nova/compute_id
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.682 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Verified node 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 matches my host compute-1.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.701 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.769 228687 DEBUG oslo_concurrency.lockutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.769 228687 DEBUG oslo_concurrency.lockutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.770 228687 DEBUG oslo_concurrency.lockutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.770 228687 DEBUG nova.compute.resource_tracker [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:48:15 compute-1 nova_compute[228683]: 2025-11-25 09:48:15.770 228687 DEBUG oslo_concurrency.processutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:16.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:48:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3733969013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.115 228687 DEBUG oslo_concurrency.processutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:48:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:16 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c005750 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.362 228687 WARNING nova.virt.libvirt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.364 228687 DEBUG nova.compute.resource_tracker [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5287MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.364 228687 DEBUG oslo_concurrency.lockutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.364 228687 DEBUG oslo_concurrency.lockutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094816 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.411 228687 DEBUG nova.compute.resource_tracker [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.411 228687 DEBUG nova.compute.resource_tracker [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:48:16 compute-1 ceph-mon[79643]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3733969013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.461 228687 DEBUG nova.scheduler.client.report [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Refreshing inventories for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.525 228687 DEBUG nova.scheduler.client.report [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Updating ProviderTree inventory for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.525 228687 DEBUG nova.compute.provider_tree [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.538 228687 DEBUG nova.scheduler.client.report [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Refreshing aggregate associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.550 228687 DEBUG nova.scheduler.client.report [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Refreshing trait associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.560 228687 DEBUG oslo_concurrency.processutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:48:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:16.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
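[annotation] The anonymous "HEAD / HTTP/1.0" requests recurring roughly every two seconds from 192.168.122.100 and 192.168.122.102 look like load-balancer health probes against the RGW beast frontend. A rough Python equivalent follows; the port is an assumption (the log does not record which port RGW listens on), and http.client speaks HTTP/1.1 rather than the HTTP/1.0 the probes use.

    # Approximate reproduction of the health probes logged above.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # the logged probes all return 200
    conn.close()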
Nov 25 09:48:16 compute-1 podman[229010]: 2025-11-25 09:48:16.807895871 +0000 UTC m=+0.059538929 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:48:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:48:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3534242528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.895 228687 DEBUG oslo_concurrency.processutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
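[annotation] processutils logs the command at 09:48:16.560 and its exit status and runtime here. A standalone equivalent of that exact call, assuming the standard "ceph df --format=json" output schema:

    # Standalone equivalent of the "ceph df" subprocess logged above.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
    print(f'{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB')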
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.899 228687 DEBUG nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 09:48:16 compute-1 nova_compute[228683]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.899 228687 INFO nova.virt.libvirt.host [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] kernel doesn't support AMD SEV
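[annotation] The SEV probe above reads a one-character flag from sysfs; on this KVM guest the file holds "N", hence the INFO line. An approximation of the check (nova's actual helper is _kernel_supports_amd_sev in nova/virt/libvirt/host.py; this version is only a sketch):

    # Sketch of the sysfs check behind the two log lines above.
    def kernel_supports_amd_sev(path='/sys/module/kvm_amd/parameters/sev'):
        try:
            with open(path) as f:
                return f.read().strip() in ('1', 'Y', 'y')
        except FileNotFoundError:
            # kvm_amd module not loaded at all
            return False

    print(kernel_supports_amd_sev())  # False on this host ("N")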
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.900 228687 DEBUG nova.compute.provider_tree [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Updating inventory in ProviderTree for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.900 228687 DEBUG nova.virt.libvirt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.944 228687 DEBUG nova.scheduler.client.report [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Updated inventory for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.944 228687 DEBUG nova.compute.provider_tree [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Updating resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 09:48:16 compute-1 nova_compute[228683]: 2025-11-25 09:48:16.945 228687 DEBUG nova.compute.provider_tree [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Updating inventory in ProviderTree for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
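[annotation] Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory logged above yields 7169 MB of RAM, 16 vCPUs, and 53.1 GB of disk. Worked directly from the logged dict:

    # Effective capacity from the inventory reported to Placement above.
    inventory = {
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # MEMORY_MB 7169.0, VCPU 16.0, DISK_GB 53.1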
Nov 25 09:48:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:16 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c005750 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:17 compute-1 nova_compute[228683]: 2025-11-25 09:48:17.023 228687 DEBUG nova.compute.provider_tree [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Updating resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 09:48:17 compute-1 nova_compute[228683]: 2025-11-25 09:48:17.046 228687 DEBUG nova.compute.resource_tracker [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:48:17 compute-1 nova_compute[228683]: 2025-11-25 09:48:17.046 228687 DEBUG oslo_concurrency.lockutils [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:17 compute-1 nova_compute[228683]: 2025-11-25 09:48:17.046 228687 DEBUG nova.service [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 25 09:48:17 compute-1 nova_compute[228683]: 2025-11-25 09:48:17.086 228687 DEBUG nova.service [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 25 09:48:17 compute-1 nova_compute[228683]: 2025-11-25 09:48:17.086 228687 DEBUG nova.servicegroup.drivers.db [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 25 09:48:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:17 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3156523680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3114636363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3534242528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/712952052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:18.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:18 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680c1220 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:18 compute-1 ceph-mon[79643]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1737525609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:18.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:18 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c005750 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:19 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c0068d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:20.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:20 compute-1 ceph-mon[79643]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:48:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:20.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:20 compute-1 sudo[229037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:48:20 compute-1 sudo[229037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:20 compute-1 sudo[229037]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:21 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c006d70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:21 compute-1 podman[229063]: 2025-11-25 09:48:21.788057753 +0000 UTC m=+0.039058258 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:48:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:22.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:22 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c006d70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:22 compute-1 ceph-mon[79643]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:48:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:22.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:22 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680c1220 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:23 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:24.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:24 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:24 compute-1 ceph-mon[79643]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:24.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:24 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:25 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:26.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:26 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:26 compute-1 ceph-mon[79643]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:26.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:26 compute-1 sudo[229082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:48:26 compute-1 sudo[229082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:26 compute-1 sudo[229082]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:26 compute-1 sudo[229107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:48:26 compute-1 sudo[229107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:26 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:27 compute-1 sudo[229107]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:27 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680c1f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:48:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:48:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:48:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:48:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:48:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:28.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:28 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:28 compute-1 ceph-mon[79643]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:28 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:29 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:30.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:30 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680c1f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:30 compute-1 ceph-mon[79643]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:30 compute-1 sudo[229163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:48:30 compute-1 sudo[229163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:30 compute-1 sudo[229163]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:30.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:30 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:31 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:48:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:32.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:48:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:32 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:32 compute-1 ceph-mon[79643]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:48:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:32.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:32 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680c1f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:33 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:34.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:34 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:34 compute-1 ceph-mon[79643]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:48:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:34.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:48:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:34 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:35 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57680c1f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:36.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:36 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:36 compute-1 ceph-mon[79643]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:48:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:36.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:48:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:36 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:37 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:38.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:38 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744001ff0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:38 compute-1 ceph-mon[79643]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:38 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3835819779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3835819779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/2669008744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/2669008744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
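[annotation] The paired "df" and "osd pool get-quota" dispatches on the volumes pool from 192.168.122.10 are consistent with a Cinder RBD backend refreshing pool stats. A standalone equivalent of the quota query; the client id and conf path are copied from the earlier nova command, and the keys follow the "ceph osd pool get-quota --format=json" output schema.

    # Equivalent of the "osd pool get-quota" mon command dispatched above.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'osd', 'pool', 'get-quota', 'volumes', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    quota = json.loads(out)
    print(quota.get('quota_max_bytes'), quota.get('quota_max_objects'))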
Nov 25 09:48:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:38.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:38 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:39 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:39 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1190347804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:48:39 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1190347804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:48:39 compute-1 podman[229194]: 2025-11-25 09:48:39.783900983 +0000 UTC m=+0.035236070 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:48:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:40.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:40 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:40 compute-1 ceph-mon[79643]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:40.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:40 compute-1 sudo[229210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:48:40 compute-1 sudo[229210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:40 compute-1 sudo[229210]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:41 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744001ff0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:41 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:42.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:42 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:42 compute-1 ceph-mon[79643]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:48:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:42.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:43 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:43 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:44.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:44 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748002320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:44 compute-1 ceph-mon[79643]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:44.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:45 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:45 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:46.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:46 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:46 compute-1 ceph-mon[79643]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:46.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:47 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748002320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:47 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094847 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:48:47 compute-1 podman[229240]: 2025-11-25 09:48:47.80201537 +0000 UTC m=+0.056478574 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:48:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:48.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:48 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f576c007690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:48 compute-1 ceph-mon[79643]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:48.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:49 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:49 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:50.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:50 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57740023d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:50 compute-1 ceph-mon[79643]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:50.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:51 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5778000df0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:51 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:52.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:52 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:52 compute-1 ceph-mon[79643]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:52.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:52 compute-1 podman[229267]: 2025-11-25 09:48:52.791242986 +0000 UTC m=+0.040085670 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:48:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:53 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774002fa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:53 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774002fa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:54.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:54 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:54 compute-1 ceph-mon[79643]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:54.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:55 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:55 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774002fa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:56.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:56 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774002fa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:56 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:48:56 compute-1 ceph-mon[79643]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:57 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:57 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:58.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:58 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774002fa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:58 compute-1 ceph-mon[79643]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:48:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:48:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:58.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:59 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5778001d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:59 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:59 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:48:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:48:59 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:49:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:49:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:00.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:49:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:00 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:00 compute-1 ceph-mon[79643]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:00.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:00 compute-1 sudo[229288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:49:00 compute-1 sudo[229288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:00 compute-1 sudo[229288]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:01 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774004f90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:01 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5778003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:02.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:02 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5778003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:02 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:49:02 compute-1 ceph-mon[79643]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:02.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:03 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:03 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57740058b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:04 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57740058b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:04 compute-1 ceph-mon[79643]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:04.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:49:04.995 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:49:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:49:04.996 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:49:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:49:04.996 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:49:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:05 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57740058b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:05 compute-1 nova_compute[228683]: 2025-11-25 09:49:05.088 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:05 compute-1 nova_compute[228683]: 2025-11-25 09:49:05.160 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:05 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:06.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:06 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57740058b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:06 compute-1 ceph-mon[79643]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:06.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:07 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57740058b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:07 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5778003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094907 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.841526) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147841567, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 779, "num_deletes": 250, "total_data_size": 1566619, "memory_usage": 1590232, "flush_reason": "Manual Compaction"}
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147845447, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 693820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18016, "largest_seqno": 18790, "table_properties": {"data_size": 690614, "index_size": 1050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8478, "raw_average_key_size": 20, "raw_value_size": 683831, "raw_average_value_size": 1628, "num_data_blocks": 46, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064095, "oldest_key_time": 1764064095, "file_creation_time": 1764064147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 3970 microseconds, and 2632 cpu microseconds.
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.845492) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 693820 bytes OK
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.845514) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.845855) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.845866) EVENT_LOG_v1 {"time_micros": 1764064147845862, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.845882) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1562488, prev total WAL file size 1562488, number of live WAL files 2.
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.846302) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(677KB)], [30(14MB)]
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147846329, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 15830343, "oldest_snapshot_seqno": -1}
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4974 keys, 12085796 bytes, temperature: kUnknown
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147873033, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12085796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12051783, "index_size": 20463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 124889, "raw_average_key_size": 25, "raw_value_size": 11960895, "raw_average_value_size": 2404, "num_data_blocks": 856, "num_entries": 4974, "num_filter_entries": 4974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.873204) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12085796 bytes
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.873549) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 591.6 rd, 451.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 14.4 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(40.2) write-amplify(17.4) OK, records in: 5467, records dropped: 493 output_compression: NoCompression
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.873563) EVENT_LOG_v1 {"time_micros": 1764064147873556, "job": 16, "event": "compaction_finished", "compaction_time_micros": 26758, "compaction_time_cpu_micros": 18936, "output_level": 6, "num_output_files": 1, "total_output_size": 12085796, "num_input_records": 5467, "num_output_records": 4974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147873738, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147875617, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.846268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.875677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.875682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.875684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.875685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:49:07.875686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:08.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:08 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:08 compute-1 ceph-mon[79643]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:49:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:08.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:09 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:09 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57740058b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:10.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:10 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5778005070 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:10 compute-1 ceph-mon[79643]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:49:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:10.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:10 compute-1 podman[229318]: 2025-11-25 09:49:10.782069854 +0000 UTC m=+0.038487948 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:49:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:11 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:11 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:12.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:12 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:12 compute-1 ceph-mon[79643]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:49:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:12.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:13 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5778005070 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:13 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:14.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:14 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:14 compute-1 ceph-mon[79643]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:14.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.895 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.895 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.895 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.920 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.920 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.921 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.921 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.921 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.921 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.922 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.922 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.922 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.955 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.955 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.955 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.955 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:49:14 compute-1 nova_compute[228683]: 2025-11-25 09:49:14.955 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:49:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:15 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:15 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:49:15 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2953543723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:15 compute-1 nova_compute[228683]: 2025-11-25 09:49:15.289 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:49:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:15 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:15 compute-1 nova_compute[228683]: 2025-11-25 09:49:15.486 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:49:15 compute-1 nova_compute[228683]: 2025-11-25 09:49:15.487 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5298MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:49:15 compute-1 nova_compute[228683]: 2025-11-25 09:49:15.488 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:49:15 compute-1 nova_compute[228683]: 2025-11-25 09:49:15.488 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:49:15 compute-1 nova_compute[228683]: 2025-11-25 09:49:15.637 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:49:15 compute-1 nova_compute[228683]: 2025-11-25 09:49:15.637 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:49:15 compute-1 nova_compute[228683]: 2025-11-25 09:49:15.685 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:49:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2953543723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:49:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2435061998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-1 nova_compute[228683]: 2025-11-25 09:49:16.021 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:49:16 compute-1 nova_compute[228683]: 2025-11-25 09:49:16.025 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:49:16 compute-1 nova_compute[228683]: 2025-11-25 09:49:16.053 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
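Placement treats each inventory record above as schedulable capacity of (total - reserved) * allocation_ratio, consumed in multiples of step_size between min_unit and max_unit. Plugging in the logged numbers:

    # Effective capacity implied by the inventory in the line above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # MEMORY_MB 7169.0, VCPU 16.0, DISK_GB 53.1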
Nov 25 09:49:16 compute-1 nova_compute[228683]: 2025-11-25 09:49:16.054 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:49:16 compute-1 nova_compute[228683]: 2025-11-25 09:49:16.054 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
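The Acquiring / acquired (waited 0.000s) / released (held 0.566s) trio above is oslo.concurrency's standard DEBUG logging around a named lock; the resource tracker runs its whole update under the "compute_resources" semaphore. A sketch of the pattern with the public decorator, class and method names simplified:

    # lockutils.synchronized emits the acquire/acquired/released DEBUG lines
    # seen above; everything in the decorated body runs with the lock held.
    from oslo_concurrency import lockutils

    class ResourceTracker:
        @lockutils.synchronized("compute_resources")
        def update_available_resource(self):
            # the "held 0.566s" figure above is time spent inside this body
            ...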
Nov 25 09:49:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:16.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
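The beast access lines repeated throughout this capture have a fixed shape: handle, client, user, timestamp, request line, status, byte count, three unused dashes, latency. A hedged parser for that shape (the field names are labels chosen here, not radosgw terminology):

    import re

    # Split one radosgw "beast:" access line into named fields.
    BEAST = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous '
            '[25/Nov/2025:09:49:16.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.match(line)
    print(m.group("client"), m.group("status"), m.group("latency"))
    # -> 192.168.122.102 200 0.000000000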
Nov 25 09:49:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:16 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:16 compute-1 ceph-mon[79643]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3660985114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4116950350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2435061998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3521299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1721534697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:16.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:17 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:17 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:49:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:18.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:49:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:18 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:18 compute-1 ceph-mon[79643]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:18.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:18 compute-1 podman[229382]: 2025-11-25 09:49:18.832039198 +0000 UTC m=+0.083009366 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:49:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:19 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:19 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f575c006c60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:20 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:20 compute-1 ceph-mon[79643]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/2422871279' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/2201242462' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:20.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:20 compute-1 sudo[229407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:49:20 compute-1 sudo[229407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:20 compute-1 sudo[229407]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:21 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:21 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748002320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:21 compute-1 ceph-mon[79643]: from='client.15132 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:21 compute-1 ceph-mon[79643]: from='client.15132 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 25 09:49:21 compute-1 ceph-mon[79643]: from='client.24751 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:22.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:22 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5748002320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:22.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:22 compute-1 ceph-mon[79643]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:23 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:23 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:23 compute-1 podman[229434]: 2025-11-25 09:49:23.78475939 +0000 UTC m=+0.035890693 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 25 09:49:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:24.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:24 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:24 compute-1 ceph-mon[79643]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:25 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:25 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:49:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:49:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:26 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:26.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:26 compute-1 ceph-mon[79643]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:27 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:27 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:28.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:28 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5780004aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:28.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:28 compute-1 ceph-mon[79643]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:29 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005a50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:29 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:49:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:49:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:30 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:30 compute-1 sudo[229456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:49:30 compute-1 sudo[229456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:30 compute-1 sudo[229456]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:30 compute-1 sudo[229481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:49:30 compute-1 sudo[229481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:30.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:30 compute-1 ceph-mon[79643]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:31 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57800055a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:31 compute-1 sudo[229481]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:31 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:32.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:32 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:32.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:32 compute-1 ceph-mon[79643]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:33 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:33 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57800055a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:49:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:49:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:49:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:49:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:49:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:34 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:34.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:34 compute-1 ceph-mon[79643]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:35 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:35 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:49:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:36.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:49:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:36 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57800055a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:36 compute-1 sudo[229538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:49:36 compute-1 sudo[229538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:36 compute-1 sudo[229538]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:36.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:36 compute-1 ceph-mon[79643]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:36 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:36 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3958313225' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/2382711148' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:37 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:37 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:37 compute-1 ceph-mon[79643]: from='client.24757 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:37 compute-1 ceph-mon[79643]: from='client.24760 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:37 compute-1 ceph-mon[79643]: from='client.24757 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 25 09:49:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:38.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:38 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:38.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:38 compute-1 ceph-mon[79643]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:39 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57800066a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:39 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:40.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:40 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:40.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:40 compute-1 ceph-mon[79643]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:41 compute-1 sudo[229565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:49:41 compute-1 sudo[229565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:41 compute-1 sudo[229565]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:41 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:41 compute-1 podman[229589]: 2025-11-25 09:49:41.058882584 +0000 UTC m=+0.035233565 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 09:49:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:41 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57800066a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:49:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:42.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:49:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:42 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005af0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:42.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:42 compute-1 ceph-mon[79643]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:43 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:43 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:43 compute-1 ceph-mon[79643]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:44.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:44 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57800073b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:44.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:45 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:45 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:46 compute-1 ceph-mon[79643]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:46.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:46 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:46.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:47 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57800073b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:47 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:48.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:48 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005b30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:48 compute-1 ceph-mon[79643]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:48.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:49 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:49 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57800073b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:49 compute-1 podman[229611]: 2025-11-25 09:49:49.456040739 +0000 UTC m=+0.058091626 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:49:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:50.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:50 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5744004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:50 compute-1 ceph-mon[79643]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:50.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:51 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005b30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:51 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:52.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:52 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:52 compute-1 ceph-mon[79643]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:52.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:53 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f573c004bc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:53 compute-1 kernel: ganesha.nfsd[229263]: segfault at 50 ip 00007f57f491632e sp 00007f57acff8210 error 4 in libntirpc.so.5.8[7f57f48fb000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 25 09:49:53 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
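The kernel line gives the faulting instruction pointer and the in-memory span of libntirpc's executable segment, so the offset inside that segment is simple arithmetic; that offset, resolved against matching debuginfo with a symbolizer such as eu-addr2line or coredumpctl, identifies the crashing function. The systemd-coredump frame just below reports + 0x2232e, which plausibly measures from the ELF load base rather than from the executable segment, hence the 0x7000 difference:

    # Offset of the faulting instruction inside libntirpc's exec mapping,
    # taken from the kernel segfault line above.
    ip   = 0x7f57f491632e   # faulting instruction pointer
    base = 0x7f57f48fb000   # segment start per "[7f57f48fb000+2c000]"
    size = 0x2c000          # segment length
    off = ip - base
    assert 0 <= off < size
    print(hex(off))          # 0x1b32e
    # systemd-coredump reports libntirpc.so.5.8 + 0x2232e for the same crash;
    # 0x2232e - 0x1b32e == 0x7000, consistent with an offset measured from the
    # ELF load base instead of this executable segment.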
Nov 25 09:49:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[215693]: 25/11/2025 09:49:53 : epoch 69257b1c : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5774005b50 fd 47 proxy ignored for local
Nov 25 09:49:53 compute-1 systemd[1]: Started Process Core Dump (PID 229637/UID 0).
Nov 25 09:49:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:54.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:54 compute-1 systemd-coredump[229638]: Process 215698 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 63:
                                                    #0  0x00007f57f491632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:49:54 compute-1 systemd[1]: systemd-coredump@5-229637-0.service: Deactivated successfully.
Nov 25 09:49:54 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:49:54 compute-1 podman[229645]: 2025-11-25 09:49:54.469342736 +0000 UTC m=+0.021179557 container died 8f85fcb1f978443303d56bdbb25b390182231b6d74e38d1ca35f0618383baf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:49:54 compute-1 systemd[1]: var-lib-containers-storage-overlay-a1d2d5b51d2d37ea7bef235f8586bbe5b2fb9723f66fe4a15266ab43661cde28-merged.mount: Deactivated successfully.
Nov 25 09:49:54 compute-1 podman[229645]: 2025-11-25 09:49:54.49074361 +0000 UTC m=+0.042580400 container remove 8f85fcb1f978443303d56bdbb25b390182231b6d74e38d1ca35f0618383baf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:49:54 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:49:54 compute-1 podman[229643]: 2025-11-25 09:49:54.496009125 +0000 UTC m=+0.048376114 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:49:54 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:49:54 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.032s CPU time.
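Note on the failure above: systemd's "code=exited, status=139" is conventionally 128 + 11 from the unit's shell/podman wrapper, i.e. the ganesha container's payload most likely died on SIGSEGV. A minimal decoding sketch in Python; the helper name is hypothetical:

    # Decode a systemd-style "status=N" value; >128 conventionally means
    # a wrapper propagating death-by-signal of its child.
    import signal

    def decode_exit_status(status: int) -> str:  # hypothetical helper
        if status > 128:
            sig = status - 128
            return f"killed by signal {sig} ({signal.Signals(sig).name})"
        return f"exited with code {status}"

    print(decode_exit_status(139))  # killed by signal 11 (SIGSEGV)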
Nov 25 09:49:54 compute-1 ceph-mon[79643]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1000117064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:49:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1000117064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:49:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:49:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:49:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:56.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:56 compute-1 ceph-mon[79643]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:56.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:58.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094958 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:49:58 compute-1 ceph-mon[79643]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:49:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:49:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:58.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/094959 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
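The two DOWN transitions above (and the later ones below) are haproxy's Layer4 checks getting "Connection refused" while the ganesha backends restart; once the new daemon binds its port again, the same check passes (09:50:14 and 09:50:19 below). A Layer4 check is just a TCP connect; a rough equivalent in Python, with host and port as placeholders (2049 is the conventional NFS port, the actual backend port depends on the ingress spec):

    # TCP-connect probe approximating haproxy's Layer4 health check.
    import socket

    def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # ECONNREFUSED, timeout, host unreachable, ...
            return False

    print(l4_check("192.168.122.101", 2049))  # placeholder backend address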
Nov 25 09:50:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:00.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:00 compute-1 ceph-mon[79643]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:00 compute-1 ceph-mon[79643]: overall HEALTH_OK
Nov 25 09:50:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:00.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:01 compute-1 sudo[229697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:50:01 compute-1 sudo[229697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:01 compute-1 sudo[229697]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:02.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:02 compute-1 ceph-mon[79643]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:02.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:04.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:04 compute-1 ceph-mon[79643]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:04 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 6.
Nov 25 09:50:04 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:50:04 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.032s CPU time.
Nov 25 09:50:04 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:50:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:04 compute-1 podman[229762]: 2025-11-25 09:50:04.885945213 +0000 UTC m=+0.030285129 container create cac41cc44311c86b40bda3b7a2b930fac620c7f97bba216cc7729cc3884ee031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 25 09:50:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5927288b830c9f3aefe58f30aafad8db99304e71b572e49aa55f2a0d5edd1ee/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5927288b830c9f3aefe58f30aafad8db99304e71b572e49aa55f2a0d5edd1ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5927288b830c9f3aefe58f30aafad8db99304e71b572e49aa55f2a0d5edd1ee/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5927288b830c9f3aefe58f30aafad8db99304e71b572e49aa55f2a0d5edd1ee/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
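The four xfs notices above are informational: 0x7fffffff is the 32-bit time_t limit, so these inode timestamps remain valid until 2038-01-19 03:14:07 UTC. Quick check:

    # 0x7fffffff seconds after the Unix epoch is the classic y2038 limit.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00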
Nov 25 09:50:04 compute-1 podman[229762]: 2025-11-25 09:50:04.927990183 +0000 UTC m=+0.072330109 container init cac41cc44311c86b40bda3b7a2b930fac620c7f97bba216cc7729cc3884ee031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:50:04 compute-1 podman[229762]: 2025-11-25 09:50:04.93186688 +0000 UTC m=+0.076206796 container start cac41cc44311c86b40bda3b7a2b930fac620c7f97bba216cc7729cc3884ee031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:50:04 compute-1 bash[229762]: cac41cc44311c86b40bda3b7a2b930fac620c7f97bba216cc7729cc3884ee031
Nov 25 09:50:04 compute-1 podman[229762]: 2025-11-25 09:50:04.872175992 +0000 UTC m=+0.016515928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:50:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:04 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:50:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:04 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:50:04 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:50:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:04 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:50:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:04 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:50:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:04 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:50:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:04 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:50:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:04 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:50:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:04 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:50:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:50:04.996 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:50:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:50:04.997 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:50:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:50:04.997 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:50:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:06.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:06 compute-1 ceph-mon[79643]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:06.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:08.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:08 compute-1 ceph-mon[79643]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:50:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:08.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:10 compute-1 ceph-mon[79643]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Nov 25 09:50:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:10.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:11 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:50:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:11 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:50:11 compute-1 podman[229820]: 2025-11-25 09:50:11.780865758 +0000 UTC m=+0.035058736 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:50:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:12.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:12 compute-1 ceph-mon[79643]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095013 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:50:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [NOTICE] 328/095013 (4) : haproxy version is 2.3.17-d1c9119
Nov 25 09:50:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [NOTICE] 328/095013 (4) : path to executable is /usr/local/sbin/haproxy
Nov 25 09:50:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [ALERT] 328/095013 (4) : backend 'backend' has no server available!
Nov 25 09:50:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095014 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:50:14 compute-1 ceph-mon[79643]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:50:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:14.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:50:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.048 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.049 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.066 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.066 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.066 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.073 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.073 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.074 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.074 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.074 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.074 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.074 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.096 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.097 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.097 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.097 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.097 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:50:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:50:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4025101835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.443 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
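The resource tracker shells out to the exact command logged at 09:50:16.097 to size the Ceph-backed disk pool. A minimal sketch of the same call, parsing cluster totals from its JSON; the key names assume the usual "ceph df --format=json" schema, so verify them against the Ceph release in use:

    # Re-run nova's capacity probe and read cluster-wide totals.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # assumed schema: total_*_bytes keys
    print(f"avail: {stats['total_avail_bytes'] / 1024**3:.1f} GiB "
          f"of {stats['total_bytes'] / 1024**3:.1f} GiB")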
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.635 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.636 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5326MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.636 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.637 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.676 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.676 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:50:16 compute-1 nova_compute[228683]: 2025-11-25 09:50:16.689 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:50:16 compute-1 ceph-mon[79643]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4025101835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:16.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:50:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3025817659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
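Grace ended well short of the announced 90 seconds: with "reclaim complete(0) clid count(0)" at 09:50:11, no NFSv4 client had state to reclaim, so ganesha could lift grace early. Worked out from the timestamps in this log:

    # Grace began 09:50:04 ("IN GRACE, duration 90") and lifted 09:50:17.
    from datetime import datetime

    start = datetime(2025, 11, 25, 9, 50, 4)
    lift = datetime(2025, 11, 25, 9, 50, 17)
    print((lift - start).total_seconds())  # 13.0 s of a possible 90 s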
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:50:17 compute-1 nova_compute[228683]: 2025-11-25 09:50:17.025 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
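The DBUS :CRIT and Kerberos warnings above trace back to two facilities missing inside the ganesha container: no /run/dbus/system_bus_socket is bind-mounted in, and /etc/krb5.keytab has no usable entry, which is typically harmless when neither DBus management nor krb5 NFS security is in use. A quick presence check, sketched under the assumption it runs inside the container:

    # Check the two paths the startup messages complain about.
    import os

    for path in ("/run/dbus/system_bus_socket", "/etc/krb5.keytab"):
        print(path, "present" if os.path.exists(path) else "missing")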
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:50:17 compute-1 nova_compute[228683]: 2025-11-25 09:50:17.030 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:50:17 compute-1 nova_compute[228683]: 2025-11-25 09:50:17.055 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
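Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, so the unchanged record above advertises 16 vCPUs, 7169 MB of RAM, and about 53 GB of disk. The arithmetic, using the numbers from the log line:

    # Effective capacity per resource class: (total - reserved) * ratio.
    inventory = {
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 4, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # MEMORY_MB 7169.0, VCPU 16.0, DISK_GB 53.1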
Nov 25 09:50:17 compute-1 nova_compute[228683]: 2025-11-25 09:50:17.056 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:50:17 compute-1 nova_compute[228683]: 2025-11-25 09:50:17.057 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb4000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:17 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4003b00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3765179905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3025817659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2335408978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:17 compute-1 nova_compute[228683]: 2025-11-25 09:50:17.877 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:17 compute-1 nova_compute[228683]: 2025-11-25 09:50:17.878 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:18.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:18 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:18 compute-1 ceph-mon[79643]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:50:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1890353352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3082497368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:18.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095019 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:50:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:19 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:19 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:19 compute-1 podman[229899]: 2025-11-25 09:50:19.801009861 +0000 UTC m=+0.056375070 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 09:50:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:50:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:20.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:50:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:20 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4003ab0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:20 compute-1 ceph-mon[79643]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:20.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:20 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:50:20.959 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:50:20 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:50:20.960 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:50:20 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:50:20.960 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:50:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:21 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb00025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:21 compute-1 sudo[229923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:50:21 compute-1 sudo[229923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:21 compute-1 sudo[229923]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:21 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:22 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:22 compute-1 ceph-mon[79643]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:22.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:22 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:50:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:23 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4005fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:23 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb00025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:24.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:24 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:24 compute-1 podman[229950]: 2025-11-25 09:50:24.785463681 +0000 UTC m=+0.036385515 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:50:24 compute-1 ceph-mon[79643]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Nov 25 09:50:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:24.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:25 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:25 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4005fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:25 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:50:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:25 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:50:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:26.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:26 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb00025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:26 compute-1 ceph-mon[79643]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Nov 25 09:50:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:27 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:27 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:28 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4006cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:28 compute-1 ceph-mon[79643]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 25 09:50:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:28 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:50:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:29 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:29 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:30.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:30 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:30 compute-1 ceph-mon[79643]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:50:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:50:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:31 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4006cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:31 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:31 compute-1 ceph-mon[79643]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:50:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:32.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:32 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:33 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:33 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:34.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:34 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:34 compute-1 ceph-mon[79643]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:50:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:35 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4006cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:35 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095035 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:50:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:36 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:36 compute-1 ceph-mon[79643]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:50:36 compute-1 sudo[229974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:50:36 compute-1 sudo[229974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:36 compute-1 sudo[229974]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:36 compute-1 sudo[229999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:50:36 compute-1 sudo[229999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:36.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:37 compute-1 sudo[229999]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:37 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:37 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb4001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:50:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:50:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:50:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:50:37 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:50:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:38 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb4001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:38 compute-1 ceph-mon[79643]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:38.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:39 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4006cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:39 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:40 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:40 compute-1 sudo[230054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:50:40 compute-1 sudo[230054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:40 compute-1 sudo[230054]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:40 compute-1 ceph-mon[79643]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:50:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:40 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:40.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:41 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb4001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:41 compute-1 sudo[230079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:50:41 compute-1 sudo[230079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:41 compute-1 sudo[230079]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:41 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4006cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:42 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0004b50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:42 compute-1 ceph-mon[79643]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:50:42 compute-1 podman[230105]: 2025-11-25 09:50:42.784071832 +0000 UTC m=+0.034316827 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:50:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:43 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:43 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb4008dc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:44.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:44 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4006cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:44 compute-1 ceph-mon[79643]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:44.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:45 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0004b50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:45 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:46.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:46 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb4008dc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:46 compute-1 ceph-mon[79643]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:46.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:47 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4006cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:47 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0004b50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:48.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:48 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:48 compute-1 ceph-mon[79643]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:48.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:49 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea80064b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:49 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ea4006cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:50.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:50 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0004b50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:50 compute-1 ceph-mon[79643]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:50 compute-1 podman[230125]: 2025-11-25 09:50:50.800879248 +0000 UTC m=+0.054247215 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:50:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:50.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:51 compute-1 kernel: ganesha.nfsd[229890]: segfault at 50 ip 00007f4f63b5432e sp 00007f4f277fd210 error 4 in libntirpc.so.5.8[7f4f63b39000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 25 09:50:51 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:50:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[229774]: 25/11/2025 09:50:51 : epoch 69257bcc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4eb0004b50 fd 37 proxy ignored for local
Nov 25 09:50:51 compute-1 systemd[1]: Started Process Core Dump (PID 230149/UID 0).
Nov 25 09:50:52 compute-1 systemd-coredump[230150]: Process 229778 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 47:
                                                    #0  0x00007f4f63b5432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:50:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:52 compute-1 systemd[1]: systemd-coredump@6-230149-0.service: Deactivated successfully.
Nov 25 09:50:52 compute-1 systemd[1]: systemd-coredump@6-230149-0.service: Consumed 1.020s CPU time.
Nov 25 09:50:52 compute-1 podman[230156]: 2025-11-25 09:50:52.216772664 +0000 UTC m=+0.017307053 container died cac41cc44311c86b40bda3b7a2b930fac620c7f97bba216cc7729cc3884ee031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:50:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-c5927288b830c9f3aefe58f30aafad8db99304e71b572e49aa55f2a0d5edd1ee-merged.mount: Deactivated successfully.
Nov 25 09:50:52 compute-1 podman[230156]: 2025-11-25 09:50:52.234450556 +0000 UTC m=+0.034984946 container remove cac41cc44311c86b40bda3b7a2b930fac620c7f97bba216cc7729cc3884ee031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:50:52 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:50:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:52.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:52 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:50:52 compute-1 ceph-mon[79643]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:50:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:52.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:54.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:54 compute-1 ceph-mon[79643]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1004775845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:50:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1004775845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:50:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:55 compute-1 podman[230191]: 2025-11-25 09:50:55.783934387 +0000 UTC m=+0.038677857 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 09:50:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:56.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:56 compute-1 ceph-mon[79643]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:56.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095057 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:50:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.869231) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257869260, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1351, "num_deletes": 255, "total_data_size": 3278039, "memory_usage": 3328080, "flush_reason": "Manual Compaction"}
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257874324, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2128333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18795, "largest_seqno": 20141, "table_properties": {"data_size": 2122606, "index_size": 3054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12091, "raw_average_key_size": 18, "raw_value_size": 2110976, "raw_average_value_size": 3308, "num_data_blocks": 137, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064148, "oldest_key_time": 1764064148, "file_creation_time": 1764064257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5116 microseconds, and 3984 cpu microseconds.
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874348) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2128333 bytes OK
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874359) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874700) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874709) EVENT_LOG_v1 {"time_micros": 1764064257874706, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3271579, prev total WAL file size 3271579, number of live WAL files 2.
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.875266) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2078KB)], [33(11MB)]
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257875298, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14214129, "oldest_snapshot_seqno": -1}
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5088 keys, 13751829 bytes, temperature: kUnknown
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257907988, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13751829, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13716168, "index_size": 21855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 128523, "raw_average_key_size": 25, "raw_value_size": 13622344, "raw_average_value_size": 2677, "num_data_blocks": 904, "num_entries": 5088, "num_filter_entries": 5088, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.908146) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13751829 bytes
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.908536) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 434.1 rd, 420.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.5 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.5) OK, records in: 5612, records dropped: 524 output_compression: NoCompression
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.908551) EVENT_LOG_v1 {"time_micros": 1764064257908545, "job": 18, "event": "compaction_finished", "compaction_time_micros": 32744, "compaction_time_cpu_micros": 19176, "output_level": 6, "num_output_files": 1, "total_output_size": 13751829, "num_input_records": 5612, "num_output_records": 5088, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257908879, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257910420, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.875201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
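The ceph-mon rocksdb lines above carry one JSON document per event after the "EVENT_LOG_v1 " marker (table_file_creation, compaction_started, flush_finished, and so on). A minimal Python sketch for pulling those payloads out of journal text, assuming the log is piped in on stdin; the field names are taken from the events above:

    import json
    import sys

    MARKER = "EVENT_LOG_v1 "
    for line in sys.stdin:
        pos = line.find(MARKER)
        if pos == -1:
            continue
        # Everything after the marker is a self-contained JSON object.
        event = json.loads(line[pos + len(MARKER):])
        # e.g. "table_file_creation" / job 17 / 2128333 bytes for the flush above;
        # events without a file_size (flush_finished, compaction_finished) print None.
        print(event.get("event"), event.get("job"), event.get("file_size"))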
Nov 25 09:50:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:58.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
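The recurring request/done/beast triplets are anonymous "HEAD /" probes arriving on a fixed ~2 s cadence from 192.168.122.100 and .102, the signature of load-balancer health checks rather than user traffic. A minimal Python sketch of the same probe; the host and port are assumptions (the access log records the client, not the RGW listen endpoint), and http.client speaks HTTP/1.1 rather than the HTTP/1.0 shown in the log:

    import http.client

    # Hypothetical endpoint: substitute the local radosgw address and port.
    conn = http.client.HTTPConnection("192.168.122.101", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # the beast lines above show 200 for these probes
    conn.close()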
Nov 25 09:50:58 compute-1 ceph-mon[79643]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:50:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:50:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:00 compute-1 ceph-mon[79643]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:00.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:01 compute-1 sudo[230211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:51:01 compute-1 sudo[230211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:01 compute-1 sudo[230211]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:02.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:02 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 7.
Nov 25 09:51:02 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:51:02 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
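systemd reports the NFS Ganesha unit at restart counter 7, so this Stopped/Starting pair is not the first cycle. A minimal sketch for reading the counter back on the node, assuming systemd >= 235 (which provides the NRestarts service property); the unit name is copied from the log line above:

    import subprocess

    UNIT = ("ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
            "@nfs.cephfs.0.0.compute-1.yfzsxe.service")
    out = subprocess.run(
        ["systemctl", "show", "-p", "NRestarts", UNIT],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected form: NRestarts=7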
Nov 25 09:51:02 compute-1 ceph-mon[79643]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:51:02 compute-1 podman[230274]: 2025-11-25 09:51:02.633513262 +0000 UTC m=+0.025451486 container create e6f382cb21db79c12103afe694664930135f74bc2fa665289e93d8c622ca05d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:51:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440786a071b7928da9a590819548dad6e8adc54982ca31d0984c5b607571dd11/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440786a071b7928da9a590819548dad6e8adc54982ca31d0984c5b607571dd11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440786a071b7928da9a590819548dad6e8adc54982ca31d0984c5b607571dd11/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440786a071b7928da9a590819548dad6e8adc54982ca31d0984c5b607571dd11/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:02 compute-1 podman[230274]: 2025-11-25 09:51:02.671055768 +0000 UTC m=+0.062994012 container init e6f382cb21db79c12103afe694664930135f74bc2fa665289e93d8c622ca05d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:51:02 compute-1 podman[230274]: 2025-11-25 09:51:02.677388245 +0000 UTC m=+0.069326469 container start e6f382cb21db79c12103afe694664930135f74bc2fa665289e93d8c622ca05d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:51:02 compute-1 bash[230274]: e6f382cb21db79c12103afe694664930135f74bc2fa665289e93d8c622ca05d5
Nov 25 09:51:02 compute-1 podman[230274]: 2025-11-25 09:51:02.62332659 +0000 UTC m=+0.015264834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:51:02 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:51:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:02 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:51:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:02 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:51:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:02 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:51:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:02 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:51:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:02 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:51:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:02 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:51:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:02 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:51:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:02 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:51:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:02.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:04.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:04 compute-1 ceph-mon[79643]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:51:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:04.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:51:04.997 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:51:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:51:04.997 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:51:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:51:04.997 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:51:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:06 compute-1 ceph-mon[79643]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:51:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:06.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:08.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:08 compute-1 ceph-mon[79643]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:51:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:08 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:51:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:08 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:51:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:08.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:10.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:10 compute-1 ceph-mon[79643]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:51:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:10.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:12.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:12 compute-1 ceph-mon[79643]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:12.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:13 compute-1 podman[230335]: 2025-11-25 09:51:13.783068468 +0000 UTC m=+0.035413243 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:51:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:14.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:14 compute-1 ceph-mon[79643]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:51:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:14 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
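Ganesha initializes, but every DBUS component logs CRIT because dbus_bus_get cannot reach /run/dbus/system_bus_socket, and the dbus service thread exits right after starting. A minimal sketch probing that exact socket path (copied from the error message); it has to run in the same mount namespace as ganesha.nfsd, e.g. via podman exec into the container started above:

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect("/run/dbus/system_bus_socket")
        print("dbus socket reachable")
    except OSError as exc:
        # Matches the startup failure above when the socket is not mounted.
        print("dbus socket unavailable:", exc)
    finally:
        s.close()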
Nov 25 09:51:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:14.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:15 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8948000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:15 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8948000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.922 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.922 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.922 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.922 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:51:15 compute-1 nova_compute[228683]: 2025-11-25 09:51:15.923 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:51:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:51:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1922449101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.259 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
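nova-compute's resource audit shells out to ceph df during each pass, and the monitor logs the matching client.openstack dispatches. A minimal sketch of that probe with the command string copied verbatim from the log; the "stats" keys reflect the usual `ceph df --format=json` layout and should be treated as an assumption about this cluster's Ceph release:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout)["stats"]
    # The pgmap lines above report the same totals: 153 MiB used of 60 GiB.
    print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])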
Nov 25 09:51:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:16.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:16 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89440013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.451 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.452 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5260MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.452 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.452 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.513 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.513 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.532 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:51:16 compute-1 ceph-mon[79643]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3595711511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1922449101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4009478358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:51:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1680067701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.875 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.878 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.893 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.894 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:51:16 compute-1 nova_compute[228683]: 2025-11-25 09:51:16.894 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:51:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095117 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
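The haproxy instance marks backend/nfs.cephfs.0 UP via a Layer4 check, and each check cycle lines up in timing with one of the ganesha svc_vc_recv "(will set dead)" events around it: a plausible reading is that the probe opens a TCP connection that never completes an RPC/PROXY header, so ntirpc kills the transport. Under that assumption, a minimal sketch of such a bare probe; 2049 is the conventional NFS port, not something the log states:

    import socket

    # Connect and close without sending any payload, as a Layer4 check does.
    with socket.create_connection(("127.0.0.1", 2049), timeout=5):
        pass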
Nov 25 09:51:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:17 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944002090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:17 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944002b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1680067701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:17 compute-1 nova_compute[228683]: 2025-11-25 09:51:17.896 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:17 compute-1 nova_compute[228683]: 2025-11-25 09:51:17.896 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:51:17 compute-1 nova_compute[228683]: 2025-11-25 09:51:17.896 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:51:17 compute-1 nova_compute[228683]: 2025-11-25 09:51:17.910 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:51:17 compute-1 nova_compute[228683]: 2025-11-25 09:51:17.910 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:17 compute-1 nova_compute[228683]: 2025-11-25 09:51:17.910 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:17 compute-1 nova_compute[228683]: 2025-11-25 09:51:17.910 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:18.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:18 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8948002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:18 compute-1 ceph-mon[79643]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 25 09:51:18 compute-1 nova_compute[228683]: 2025-11-25 09:51:18.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:18.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:19 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:19 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c0025f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:20.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:20 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:20 compute-1 ceph-mon[79643]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:51:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1315945080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2179395092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:20.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:21 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8948002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:21 compute-1 sudo[230416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:51:21 compute-1 sudo[230416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:21 compute-1 sudo[230416]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:21 compute-1 podman[230440]: 2025-11-25 09:51:21.369502226 +0000 UTC m=+0.059621365 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 09:51:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:21 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:22.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:22 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:22 compute-1 ceph-mon[79643]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:51:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:23 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:23 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:24.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:24 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:24 compute-1 ceph-mon[79643]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:24.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:25 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:25 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:26 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8948002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:26 compute-1 ceph-mon[79643]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:26 compute-1 podman[230467]: 2025-11-25 09:51:26.783171756 +0000 UTC m=+0.037019772 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:51:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:26.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:27 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:27 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:28.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:28 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944005470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:28 compute-1 ceph-mon[79643]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:29 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89480095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:29 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944005470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:30 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:30 compute-1 ceph-mon[79643]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:30.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:31 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944006180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:31 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89480095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:32.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:32 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944006180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:32 compute-1 ceph-mon[79643]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:32.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:33 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944006180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:33 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:34.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:34 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f894800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:34 compute-1 ceph-mon[79643]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:34.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:35 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944006180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:35 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944006180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:36.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:36 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:36 compute-1 ceph-mon[79643]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:36.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:37 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:37 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:38.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:38 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:38 compute-1 ceph-mon[79643]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:51:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:38.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:39 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:39 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:40.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:40 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:40 compute-1 sudo[230491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:51:40 compute-1 sudo[230491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:40 compute-1 sudo[230491]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:40 compute-1 sudo[230516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:51:40 compute-1 sudo[230516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:40 compute-1 ceph-mon[79643]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:40.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:40 compute-1 sudo[230516]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:41 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f894800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:41 compute-1 sudo[230572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:51:41 compute-1 sudo[230572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:41 compute-1 sudo[230572]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:41 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:41 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:51:41 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:51:41 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:41 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:41 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:51:41 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:51:41 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:51:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:42.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:42 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:42 compute-1 ceph-mon[79643]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:42.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:43 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:43 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f894800adb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:44.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:44 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:44 compute-1 sudo[230598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:51:44 compute-1 sudo[230598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:44 compute-1 sudo[230598]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:44 compute-1 podman[230622]: 2025-11-25 09:51:44.565590276 +0000 UTC m=+0.035211574 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:51:44 compute-1 ceph-mon[79643]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:44.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:45 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f893c005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:45 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f894800adb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.002000020s ======
Nov 25 09:51:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - - [25/Nov/2025:09:51:46.108 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.002000020s
Nov 25 09:51:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:46 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f894800adb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:46 compute-1 ceph-mon[79643]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.748605) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306748624, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 735, "num_deletes": 251, "total_data_size": 1405692, "memory_usage": 1420368, "flush_reason": "Manual Compaction"}
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306752198, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 928493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20146, "largest_seqno": 20876, "table_properties": {"data_size": 924953, "index_size": 1384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8273, "raw_average_key_size": 19, "raw_value_size": 917791, "raw_average_value_size": 2159, "num_data_blocks": 61, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064258, "oldest_key_time": 1764064258, "file_creation_time": 1764064306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3612 microseconds, and 2345 cpu microseconds.
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.752217) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 928493 bytes OK
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.752227) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.752540) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.752549) EVENT_LOG_v1 {"time_micros": 1764064306752546, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.752741) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1401748, prev total WAL file size 1401748, number of live WAL files 2.
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.753077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(906KB)], [36(13MB)]
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306753098, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 14680322, "oldest_snapshot_seqno": -1}
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4997 keys, 12486074 bytes, temperature: kUnknown
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306780940, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12486074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12452033, "index_size": 20427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127284, "raw_average_key_size": 25, "raw_value_size": 12360728, "raw_average_value_size": 2473, "num_data_blocks": 840, "num_entries": 4997, "num_filter_entries": 4997, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.781124) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12486074 bytes
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.781550) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 525.9 rd, 447.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.1 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(29.3) write-amplify(13.4) OK, records in: 5513, records dropped: 516 output_compression: NoCompression
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.781570) EVENT_LOG_v1 {"time_micros": 1764064306781565, "job": 20, "event": "compaction_finished", "compaction_time_micros": 27917, "compaction_time_cpu_micros": 18220, "output_level": 6, "num_output_files": 1, "total_output_size": 12486074, "num_input_records": 5513, "num_output_records": 4997, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306781844, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306783663, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.753011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.783702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.783705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.783706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.783707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:51:46.783709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:46.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:47 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8944007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:47 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8960002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:48.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:48 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f894800adb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:48 compute-1 ceph-mon[79643]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:51:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:51:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:48.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:51:49 compute-1 kernel: ganesha.nfsd[230352]: segfault at 50 ip 00007f89f825532e sp 00007f89c77fd210 error 4 in libntirpc.so.5.8[7f89f823a000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 25 09:51:49 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:51:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230287]: 25/11/2025 09:51:49 : epoch 69257c06 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f894800adb0 fd 38 proxy ignored for local
Nov 25 09:51:49 compute-1 systemd[1]: Started Process Core Dump (PID 230642/UID 0).
Nov 25 09:51:49 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 25 09:51:50 compute-1 systemd-coredump[230643]: Process 230291 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007f89f825532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:51:50 compute-1 systemd[1]: systemd-coredump@7-230642-0.service: Deactivated successfully.
Nov 25 09:51:50 compute-1 systemd[1]: systemd-coredump@7-230642-0.service: Consumed 1.026s CPU time.
Nov 25 09:51:50 compute-1 podman[230649]: 2025-11-25 09:51:50.263658377 +0000 UTC m=+0.018742260 container died e6f382cb21db79c12103afe694664930135f74bc2fa665289e93d8c622ca05d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:51:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-440786a071b7928da9a590819548dad6e8adc54982ca31d0984c5b607571dd11-merged.mount: Deactivated successfully.
Nov 25 09:51:50 compute-1 podman[230649]: 2025-11-25 09:51:50.282291641 +0000 UTC m=+0.037375514 container remove e6f382cb21db79c12103afe694664930135f74bc2fa665289e93d8c622ca05d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:51:50 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:51:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:50.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:50 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:51:50 compute-1 ceph-mon[79643]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:50 compute-1 ceph-mon[79643]: osdmap e137: 3 total, 3 up, 3 in
Nov 25 09:51:50 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 25 09:51:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:50.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:51 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 25 09:51:51 compute-1 ceph-mon[79643]: osdmap e138: 3 total, 3 up, 3 in
Nov 25 09:51:51 compute-1 podman[230685]: 2025-11-25 09:51:51.803092522 +0000 UTC m=+0.057179622 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:51:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:52.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:52 compute-1 ceph-mon[79643]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 25 09:51:52 compute-1 ceph-mon[79643]: osdmap e139: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:52.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:53 compute-1 ceph-mon[79643]: osdmap e140: 3 total, 3 up, 3 in
Nov 25 09:51:53 compute-1 ceph-mon[79643]: osdmap e141: 3 total, 3 up, 3 in
Nov 25 09:51:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:54.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:54 compute-1 ceph-mon[79643]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 3.0 KiB/s wr, 26 op/s
Nov 25 09:51:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3337901668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:51:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3337901668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:51:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:54.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095155 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:51:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:56.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:56 compute-1 ceph-mon[79643]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.4 KiB/s wr, 20 op/s
Nov 25 09:51:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:57 compute-1 podman[230712]: 2025-11-25 09:51:57.786953622 +0000 UTC m=+0.040311516 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:51:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:58.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:58 compute-1 ceph-mon[79643]: pgmap v619: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 6.8 MiB/s wr, 47 op/s
Nov 25 09:51:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:51:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:58.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:00.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:00 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 8.
Nov 25 09:52:00 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:52:00 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:52:00 compute-1 podman[230767]: 2025-11-25 09:52:00.649819755 +0000 UTC m=+0.026573752 container create b1d43a07869ef363b5536df3d09cfa3ca82ffb4b1e5c8c523eee9fb43bf2425b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:52:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56eca2dfc04db3e8f5e2bfc8318ddc076257a2f2171e3ea9f42eb8191218c987/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56eca2dfc04db3e8f5e2bfc8318ddc076257a2f2171e3ea9f42eb8191218c987/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56eca2dfc04db3e8f5e2bfc8318ddc076257a2f2171e3ea9f42eb8191218c987/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56eca2dfc04db3e8f5e2bfc8318ddc076257a2f2171e3ea9f42eb8191218c987/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:00 compute-1 podman[230767]: 2025-11-25 09:52:00.68360544 +0000 UTC m=+0.060359447 container init b1d43a07869ef363b5536df3d09cfa3ca82ffb4b1e5c8c523eee9fb43bf2425b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:52:00 compute-1 podman[230767]: 2025-11-25 09:52:00.689157426 +0000 UTC m=+0.065911422 container start b1d43a07869ef363b5536df3d09cfa3ca82ffb4b1e5c8c523eee9fb43bf2425b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:52:00 compute-1 bash[230767]: b1d43a07869ef363b5536df3d09cfa3ca82ffb4b1e5c8c523eee9fb43bf2425b
Nov 25 09:52:00 compute-1 podman[230767]: 2025-11-25 09:52:00.639123915 +0000 UTC m=+0.015877931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:52:00 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:52:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:52:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:52:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:52:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:52:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:52:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:52:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:52:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:00 compute-1 ceph-mon[79643]: pgmap v620: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.4 MiB/s wr, 37 op/s
Nov 25 09:52:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:00.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:01 compute-1 sudo[230822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:52:01 compute-1 sudo[230822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:01 compute-1 sudo[230822]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:02 compute-1 ceph-mon[79643]: pgmap v621: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.7 MiB/s wr, 33 op/s
Nov 25 09:52:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:02.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:04 compute-1 ceph-mon[79643]: pgmap v622: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.9 MiB/s wr, 27 op/s
Nov 25 09:52:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:04.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:52:04.998 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:52:04.998 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:52:04.998 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095206 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:52:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:06 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:06 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:06 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 09:52:06 compute-1 ceph-mon[79643]: pgmap v623: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.4 MiB/s wr, 24 op/s
Nov 25 09:52:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:06.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:07 compute-1 ceph-mon[79643]: pgmap v624: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.4 MiB/s wr, 25 op/s
Nov 25 09:52:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:08 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:52:08.497 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:52:08 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:52:08.498 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:52:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:08.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:10.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:10 compute-1 ceph-mon[79643]: pgmap v625: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:52:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:10.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:10 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:10 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:10 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:11 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 09:52:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:11 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:11 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:11 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:12.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:12 compute-1 ceph-mon[79643]: pgmap v626: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:52:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:12.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:14 compute-1 ceph-mon[79643]: pgmap v627: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 09:52:14 compute-1 podman[230853]: 2025-11-25 09:52:14.804964935 +0000 UTC m=+0.059859344 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:52:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:14.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:16.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:16 compute-1 ceph-mon[79643]: pgmap v628: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 09:52:16 compute-1 nova_compute[228683]: 2025-11-25 09:52:16.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:16 compute-1 nova_compute[228683]: 2025-11-25 09:52:16.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:16 compute-1 nova_compute[228683]: 2025-11-25 09:52:16.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:16 compute-1 nova_compute[228683]: 2025-11-25 09:52:16.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:16 compute-1 nova_compute[228683]: 2025-11-25 09:52:16.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:16 compute-1 nova_compute[228683]: 2025-11-25 09:52:16.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:16 compute-1 nova_compute[228683]: 2025-11-25 09:52:16.910 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:52:16 compute-1 nova_compute[228683]: 2025-11-25 09:52:16.911 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:16.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:52:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2106931284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.248 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.428 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.429 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5222MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.429 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.429 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.484 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.485 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.498 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:17 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:52:17.500 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:52:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2106931284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:52:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2717398405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.829 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.833 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.847 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.848 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:52:17 compute-1 nova_compute[228683]: 2025-11-25 09:52:17.848 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:18.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:18 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af4001e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:18 compute-1 ceph-mon[79643]: pgmap v629: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:52:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1419400791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2717398405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3747932840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.844 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.845 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.845 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.846 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.856 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.856 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.856 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-1 nova_compute[228683]: 2025-11-25 09:52:18.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:19 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0013a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:19 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0013a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095219 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:52:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [ALERT] 328/095219 (4) : backend 'backend' has no server available!
Nov 25 09:52:19 compute-1 nova_compute[228683]: 2025-11-25 09:52:19.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:19 compute-1 nova_compute[228683]: 2025-11-25 09:52:19.901 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:20 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:20 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:20.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:20 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:20 compute-1 ceph-mon[79643]: pgmap v630: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:52:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095221 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:52:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:21 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:21 compute-1 sudo[230933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:52:21 compute-1 sudo[230933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:21 compute-1 sudo[230933]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:21 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc002960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4107048521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3764020354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:22 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc002960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:22 compute-1 ceph-mon[79643]: pgmap v631: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:22 compute-1 podman[230958]: 2025-11-25 09:52:22.799551135 +0000 UTC m=+0.054255571 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:52:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:23 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:23 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:24.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:24 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:24 compute-1 ceph-mon[79643]: pgmap v632: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:25 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc003910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:25 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc003910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:25 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:26.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:26 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc003910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:26 compute-1 ceph-mon[79643]: pgmap v633: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:27 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af4002930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:27 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af4002930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3604582290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:28.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:28 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc004b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:28 compute-1 ceph-mon[79643]: pgmap v634: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 25 09:52:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:28 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:52:28 compute-1 podman[230986]: 2025-11-25 09:52:28.784582049 +0000 UTC m=+0.038045144 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:52:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:52:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 7032 writes, 27K keys, 7032 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7032 writes, 1507 syncs, 4.67 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 646 writes, 1251 keys, 646 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 646 writes, 310 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:52:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
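The beast lines above are radosgw's access log: request pointer, client IP, user, timestamp, request line, HTTP status, body bytes, and latency. A minimal parser, assuming only the field layout visible in these samples (the regex and field names below are mine, not a documented radosgw format):

    import re

    # Hypothetical parser for the radosgw "beast" access-log lines above.
    # Field layout is inferred from the samples, not from a radosgw spec.
    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    sample = ('beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous '
              '[25/Nov/2025:09:52:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.000000000s')
    print(parse_beast(sample))  # {'req': '0x7fe80522a5d0', 'ip': '192.168.122.100', ...}

These HEAD / probes arrive every second or so from 192.168.122.100 and .102, which is consistent with load-balancer health checks rather than user traffic.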
Nov 25 09:52:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:29 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:29 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:30.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:30 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:30 compute-1 ceph-mon[79643]: pgmap v635: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 25 09:52:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:30 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 25 09:52:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:30.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:31 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:31 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:31 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 25 09:52:31 compute-1 ceph-mon[79643]: osdmap e142: 3 total, 3 up, 3 in
Nov 25 09:52:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:31 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:32.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:32 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:32 compute-1 ceph-mon[79643]: pgmap v637: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 818 B/s wr, 10 op/s
Nov 25 09:52:32 compute-1 ceph-mon[79643]: osdmap e143: 3 total, 3 up, 3 in
Nov 25 09:52:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:33.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:33 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:33 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:33 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1340971977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:52:33 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3791333604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:52:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:34 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:34 compute-1 ceph-mon[79643]: pgmap v639: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1023 B/s wr, 13 op/s
Nov 25 09:52:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:34 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:34 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095234 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:52:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:35.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:35 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:35 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:36.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:36 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:36 compute-1 ceph-mon[79643]: pgmap v640: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 511 B/s wr, 11 op/s
Nov 25 09:52:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:37.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:37 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:37 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:37 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
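The reaper lines trace a complete NFS-Ganesha grace cycle: the server enters grace at 09:52:31 with a 90-second window, reloads reclaim state from the backend at 09:52:34, finds no clients holding state (clid count(0)), and lifts grace early at 09:52:37. A small sketch that recovers those transitions from journal text like the above (purely illustrative, not a ganesha tool):

    import re
    from datetime import datetime

    # Extract NFS grace transitions from journal lines like the ones above.
    GRACE_RE = re.compile(r'(\w+ +\d+ [\d:]+).*?NFS Server Now (IN GRACE|NOT IN GRACE)')

    def grace_events(lines):
        for line in lines:
            m = GRACE_RE.search(line)
            if m:
                # Syslog timestamps carry no year; 2025 is taken from the log body.
                ts = datetime.strptime(m.group(1) + ' 2025', '%b %d %H:%M:%S %Y')
                yield ts, m.group(2)

    log = [
        'Nov 25 09:52:31 compute-1 ...: ... NFS Server Now IN GRACE, duration 90',
        'Nov 25 09:52:37 compute-1 ...: ... NFS Server Now NOT IN GRACE',
    ]
    events = list(grace_events(log))
    start, end = events[0][0], events[1][0]
    print((end - start).seconds)  # 6 -- grace lifted well before the 90 s window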
Nov 25 09:52:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 25 09:52:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:38.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:38 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:38 compute-1 ceph-mon[79643]: pgmap v641: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 109 op/s
Nov 25 09:52:38 compute-1 ceph-mon[79643]: osdmap e144: 3 total, 3 up, 3 in
Nov 25 09:52:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:39.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:39 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:39 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc004b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095239 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
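haproxy's "Layer4 check passed" in the two WARNING lines above means nothing more than a successful TCP connect to the backend within the check timeout. An equivalent probe, with placeholder host and port since the log does not show the backend addresses:

    import socket

    # What a haproxy layer4 check amounts to: a bare TCP connect.
    # Host and port below are illustrative placeholders, not from the log.
    def layer4_up(host, port, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(layer4_up('192.168.122.100', 2049))  # 2049 = NFS; example only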
Nov 25 09:52:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:40.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:40 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:40 compute-1 ceph-mon[79643]: pgmap v643: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 98 op/s
Nov 25 09:52:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:41 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:41 compute-1 sudo[231010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:52:41 compute-1 sudo[231010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:41 compute-1 sudo[231010]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:41 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:42 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0058a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:42 compute-1 ceph-mon[79643]: pgmap v644: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 09:52:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:43.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:43 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:43 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:44.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:44 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:44 compute-1 ceph-mon[79643]: pgmap v645: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 09:52:44 compute-1 sudo[231036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:52:44 compute-1 sudo[231036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:44 compute-1 sudo[231036]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:44 compute-1 sudo[231061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 25 09:52:44 compute-1 sudo[231061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:44 compute-1 podman[231098]: 2025-11-25 09:52:44.944395833 +0000 UTC m=+0.037583794 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:52:44 compute-1 sudo[231061]: pam_unix(sudo:session): session closed for user root
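The podman record above is a periodic healthcheck event for ovn_metadata_agent: podman ran the configured test (the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent) and logged health_status=healthy with a failing streak of 0. The same check can be run on demand; exit status 0 means healthy:

    import subprocess

    # Trigger the container's configured healthcheck manually; this is the
    # same test behind the periodic health_status events in the journal.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
        capture_output=True, text=True,
    )
    print('healthy' if result.returncode == 0
          else f'unhealthy: {result.stdout or result.stderr}')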
Nov 25 09:52:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:45.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:45 compute-1 sudo[231120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:52:45 compute-1 sudo[231120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:45 compute-1 sudo[231120]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:45 compute-1 sudo[231145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:52:45 compute-1 sudo[231145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:45 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0058a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:45 compute-1 sudo[231145]: pam_unix(sudo:session): session closed for user root
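The sudo trail above is cephadm's usual SSH probe sequence: the orchestrator ships a cephadm copy under /var/lib/ceph/<fsid>/ and runs check-host, then gather-facts, as root via the ceph-admin account. A sketch of the second step, assuming (as with current cephadm) that gather-facts prints a JSON fact map on stdout; the two fields printed at the end are examples, not a guaranteed schema:

    import json
    import subprocess

    # Path copied verbatim from the sudo lines above.
    CEPHADM = ('/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/'
               'cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36')

    # Mirror the audited invocation: sudo /bin/python3 <cephadm> gather-facts.
    out = subprocess.run(
        ['sudo', '/bin/python3', CEPHADM, '--timeout', '895', 'gather-facts'],
        capture_output=True, text=True, check=True,
    ).stdout
    facts = json.loads(out)
    print(facts.get('hostname'), facts.get('kernel'))  # field names assumed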
Nov 25 09:52:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:45 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:52:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:52:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:46.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:46 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0058a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:46 compute-1 ceph-mon[79643]: pgmap v646: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 09:52:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:47.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:47 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:47 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:52:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:48.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:52:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:48 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:48 compute-1 ceph-mon[79643]: pgmap v647: 337 pgs: 337 active+clean; 109 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.4 MiB/s wr, 94 op/s
Nov 25 09:52:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:49.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:49 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:49 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:49 compute-1 sudo[231204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:52:49 compute-1 sudo[231204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:49 compute-1 sudo[231204]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:50.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:50 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:50 compute-1 ceph-mon[79643]: pgmap v648: 337 pgs: 337 active+clean; 109 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Nov 25 09:52:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:51 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:51 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:52.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:52 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:52 compute-1 ceph-mon[79643]: pgmap v649: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Nov 25 09:52:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:53.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:53 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:53 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14003bf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:53 compute-1 podman[231231]: 2025-11-25 09:52:53.800373973 +0000 UTC m=+0.055251634 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:52:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 09:52:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1195725717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:52:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 09:52:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1195725717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:52:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:54.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:54 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:54 compute-1 ceph-mon[79643]: pgmap v650: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:52:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1195725717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:52:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1195725717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
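The audited df and "osd pool get-quota" calls from client.openstack above are the periodic capacity poll an OpenStack Cinder backend runs against its volumes pool. The equivalent queries from the ceph CLI look like this; the JSON keys read at the end are the standard ones in ceph df / get-quota output, but worth verifying on your release:

    import json
    import subprocess

    def ceph_json(*args):
        # Thin wrapper mirroring the mon commands audited above.
        out = subprocess.run(['ceph', *args, '--format', 'json'],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = ceph_json('df')
    quota = ceph_json('osd', 'pool', 'get-quota', 'volumes')
    print(df['stats']['total_avail_bytes'], quota.get('quota_max_bytes'))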
Nov 25 09:52:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:55.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:55 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:55 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:56.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:56 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14004710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:56 compute-1 ceph-mon[79643]: pgmap v651: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:52:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:57.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:52:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3982 writes, 21K keys, 3982 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s
                                           Cumulative WAL: 3982 writes, 3982 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1476 writes, 7160 keys, 1476 commit groups, 1.0 writes per commit group, ingest: 16.93 MB, 0.03 MB/s
                                           Interval WAL: 1476 writes, 1476 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    412.3      0.08              0.05        10    0.008       0      0       0.0       0.0
                                             L6      1/0   11.91 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    500.4    422.5      0.26              0.16         9    0.029     42K   4816       0.0       0.0
                                            Sum      1/0   11.91 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    385.2    420.1      0.34              0.21        19    0.018     42K   4816       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.4    408.2    413.1      0.15              0.10         8    0.019     22K   2555       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    500.4    422.5      0.26              0.16         9    0.029     42K   4816       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    416.7      0.08              0.05         9    0.009       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.3 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633d9fc7350#2 capacity: 304.00 MB usage: 9.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(487,8.91 MB,2.93101%) FilterBlock(19,125.17 KB,0.0402099%) IndexBlock(19,239.66 KB,0.0769866%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
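In the compaction tables above, W-Amp is write amplification; for the Sum row it works out to total compaction bytes written divided by bytes flushed from memtables, which this dump lets you cross-check directly from the two summary lines (a rough check, not an exact replay of RocksDB's arithmetic):

    import re

    # Cross-check the Sum W-Amp (4.4) in the [default] table above using the
    # Flush and Cumulative-compaction summary lines from the same dump.
    dump = """
    Flush(GB): cumulative 0.032, interval 0.011
    Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.3 seconds
    """
    flushed = float(re.search(r'Flush\(GB\): cumulative ([\d.]+)', dump).group(1))
    compacted = float(re.search(r'Cumulative compaction: ([\d.]+) GB write', dump).group(1))
    print(round(compacted / flushed, 1))  # 4.4, matching the Sum W-Amp column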
Nov 25 09:52:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:57 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:57 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:58 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:58 compute-1 ceph-mon[79643]: pgmap v652: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:52:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:52:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:59.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:59 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14004710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:52:59 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:59 compute-1 podman[231258]: 2025-11-25 09:52:59.786907487 +0000 UTC m=+0.041754687 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 09:53:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:00.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:00 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:00 compute-1 ceph-mon[79643]: pgmap v653: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 106 KiB/s wr, 24 op/s
Nov 25 09:53:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:01.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:01 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:01 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14004710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:01 compute-1 sudo[231276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:53:01 compute-1 sudo[231276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:01 compute-1 sudo[231276]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:02.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:02 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:02 compute-1 ceph-mon[79643]: pgmap v654: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 111 KiB/s wr, 24 op/s
Nov 25 09:53:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:03.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:03 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.398 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "ebc04c36-e114-4585-838e-99c2fcc19170" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.398 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.412 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.471 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.471 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
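
These lockutils lines bracket nova's resource claim: the tracker serializes all accounting under the in-process "compute_resources" lock, and the waited/held durations are logged on acquire and release. A minimal sketch of the same oslo.concurrency pattern; the guarded function body is illustrative, not nova's actual code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # Critical section: resource accounting runs under the named lock,
        # as ResourceTracker.instance_claim does in the lines above.
        pass

    instance_claim()
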
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.475 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.476 228687 INFO nova.compute.claims [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Claim successful on node compute-1.ctlplane.example.com
Nov 25 09:53:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:03 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.561 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:53:03 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/756321123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.895 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.899 228687 DEBUG nova.compute.provider_tree [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.920 228687 DEBUG nova.scheduler.client.report [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
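
The inventory dict above is what placement schedules against. Capacity per resource class works out as (total - reserved) * allocation_ratio; worked out for the values logged:

    # Pure arithmetic on the inventory reported above.
    vcpu = (4 - 0) * 4.0              # 16 schedulable vCPUs
    memory_mb = (7681 - 512) * 1.0    # 7169 MB schedulable RAM
    disk_gb = (59 - 0) * 0.9          # 53.1 GB schedulable disk
    print(vcpu, memory_mb, disk_gb)
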
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.937 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.937 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.973 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.973 228687 DEBUG nova.network.neutron [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:53:03 compute-1 nova_compute[228683]: 2025-11-25 09:53:03.991 228687 INFO nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.003 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.062 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.063 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.063 228687 INFO nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Creating image(s)
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.083 228687 DEBUG nova.storage.rbd_utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ebc04c36-e114-4585-838e-99c2fcc19170_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.102 228687 DEBUG nova.storage.rbd_utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ebc04c36-e114-4585-838e-99c2fcc19170_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.121 228687 DEBUG nova.storage.rbd_utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ebc04c36-e114-4585-838e-99c2fcc19170_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.123 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.124 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:04.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:04 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14005ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.472 228687 WARNING oslo_policy.policy [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.472 228687 WARNING oslo_policy.policy [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
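
This warning repeats on every policy load until the JSON policy file is replaced. The remedy is the conversion tool the message itself names; a hedged invocation, with paths that are assumptions for this deployment:

    oslopolicy-convert-json-to-yaml --namespace nova \
        --policy-file /etc/nova/policy.json \
        --output-file /etc/nova/policy.yaml
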
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.474 228687 DEBUG nova.policy [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:53:04 compute-1 nova_compute[228683]: 2025-11-25 09:53:04.492 228687 DEBUG nova.virt.libvirt.imagebackend [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image locations are: [{'url': 'rbd://af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/images/62ddd1b7-1bba-493e-a10f-b03a12ab3457/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/images/62ddd1b7-1bba-493e-a10f-b03a12ab3457/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 09:53:04 compute-1 ceph-mon[79643]: pgmap v655: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:53:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/756321123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:04.999 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:04.999 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:04.999 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:05.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.120 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.132 228687 DEBUG nova.network.neutron [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Successfully created port: a6bda4e1-b79a-4869-81eb-be1e41b174a8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.166 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.166 228687 DEBUG nova.virt.images [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] 62ddd1b7-1bba-493e-a10f-b03a12ab3457 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.167 228687 DEBUG nova.privsep.utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.167 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:05 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.234 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.237 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.284 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
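
Lines 09:53:05.120 through .284 are nova's fetch_to_raw sequence: probe the downloaded base image with qemu-img info, convert qcow2 to raw (the RBD backend wants raw), then re-probe the result. Condensed into a sketch; the base path is shortened here, nova names it by the image content's SHA-1:

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/image"  # illustrative path

    def img_info(path: str) -> dict:
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if img_info(BASE + ".part")["format"] == "qcow2":
        # -t none bypasses the host page cache, as in the logged command.
        subprocess.run(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             BASE + ".part", BASE + ".converted"],
            check=True,
        )
        assert img_info(BASE + ".converted")["format"] == "raw"
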
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.286 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.309 228687 DEBUG nova.storage.rbd_utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ebc04c36-e114-4585-838e-99c2fcc19170_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.314 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 ebc04c36-e114-4585-838e-99c2fcc19170_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.488 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 ebc04c36-e114-4585-838e-99c2fcc19170_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.545 228687 DEBUG nova.storage.rbd_utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image ebc04c36-e114-4585-838e-99c2fcc19170_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
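
After the rbd import places the base image in the vms pool, nova grows it to the flavor's 1 GiB root disk. A sketch of the same resize through the python-rbd bindings, assuming they are installed and that client.openstack can open the pool (pool, image name, and size are taken from the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            name = "ebc04c36-e114-4585-838e-99c2fcc19170_disk"
            with rbd.Image(ioctx, name) as image:
                image.resize(1073741824)  # 1 GiB, per the resize logged above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
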
Nov 25 09:53:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:05 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.611 228687 DEBUG nova.objects.instance [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid ebc04c36-e114-4585-838e-99c2fcc19170 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.622 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.622 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Ensure instance console log exists: /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.622 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.623 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:05 compute-1 nova_compute[228683]: 2025-11-25 09:53:05.623 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:06.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:06 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:06 compute-1 ceph-mon[79643]: pgmap v656: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:53:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:07.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:07 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14005ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:07 compute-1 nova_compute[228683]: 2025-11-25 09:53:07.231 228687 DEBUG nova.network.neutron [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Successfully updated port: a6bda4e1-b79a-4869-81eb-be1e41b174a8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:53:07 compute-1 nova_compute[228683]: 2025-11-25 09:53:07.242 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:53:07 compute-1 nova_compute[228683]: 2025-11-25 09:53:07.242 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:53:07 compute-1 nova_compute[228683]: 2025-11-25 09:53:07.242 228687 DEBUG nova.network.neutron [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:53:07 compute-1 nova_compute[228683]: 2025-11-25 09:53:07.316 228687 DEBUG nova.compute.manager [req-50ce872e-3797-4ed2-9e2b-db5bd8cdf585 req-e9f0dfc6-cfc9-47dd-88f8-0f280f8a6b65 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received event network-changed-a6bda4e1-b79a-4869-81eb-be1e41b174a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:07 compute-1 nova_compute[228683]: 2025-11-25 09:53:07.316 228687 DEBUG nova.compute.manager [req-50ce872e-3797-4ed2-9e2b-db5bd8cdf585 req-e9f0dfc6-cfc9-47dd-88f8-0f280f8a6b65 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Refreshing instance network info cache due to event network-changed-a6bda4e1-b79a-4869-81eb-be1e41b174a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:53:07 compute-1 nova_compute[228683]: 2025-11-25 09:53:07.316 228687 DEBUG oslo_concurrency.lockutils [req-50ce872e-3797-4ed2-9e2b-db5bd8cdf585 req-e9f0dfc6-cfc9-47dd-88f8-0f280f8a6b65 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:53:07 compute-1 nova_compute[228683]: 2025-11-25 09:53:07.382 228687 DEBUG nova.network.neutron [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:53:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:07 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.212 228687 DEBUG nova.network.neutron [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Updating instance_info_cache with network_info: [{"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
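
The port lands on a tunneled network with mtu 1442. Assuming the usual 1500-byte underlay, that is the common OVN Geneve overhead of 58 bytes on IPv4:

    # Worked arithmetic for mtu=1442 on this tunneled (Geneve/OVN) network.
    outer = 14 + 20 + 8     # outer Ethernet + IPv4 + UDP
    geneve = 8 + 8          # Geneve base header + OVN metadata option
    print(1500 - (outer + geneve))  # 1442
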
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.227 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.227 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Instance network_info: |[{"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.228 228687 DEBUG oslo_concurrency.lockutils [req-50ce872e-3797-4ed2-9e2b-db5bd8cdf585 req-e9f0dfc6-cfc9-47dd-88f8-0f280f8a6b65 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.228 228687 DEBUG nova.network.neutron [req-50ce872e-3797-4ed2-9e2b-db5bd8cdf585 req-e9f0dfc6-cfc9-47dd-88f8-0f280f8a6b65 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Refreshing network info cache for port a6bda4e1-b79a-4869-81eb-be1e41b174a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.230 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Start _get_guest_xml network_info=[{"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'boot_index': 0, 'device_name': '/dev/vda', 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.234 228687 WARNING nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.238 228687 DEBUG nova.virt.libvirt.host [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.238 228687 DEBUG nova.virt.libvirt.host [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.240 228687 DEBUG nova.virt.libvirt.host [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.241 228687 DEBUG nova.virt.libvirt.host [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
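
The two probes above first look for a cgroups v1 cpu controller (absent) and then find one under cgroups v2. On a v2-only host such as this one, the check reduces to reading a single file; a minimal sketch:

    def has_cpu_controller_v2() -> bool:
        # /sys/fs/cgroup/cgroup.controllers lists the enabled v2 controllers.
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroups v2 host

    print(has_cpu_controller_v2())  # True on this host, per the log
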
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.241 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.241 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.241 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.242 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.242 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.242 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.242 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.242 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.243 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.243 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.243 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.243 228687 DEBUG nova.virt.hardware [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
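
For a 1-vCPU flavor with no topology constraints, the only (sockets, cores, threads) triple whose product is 1 is 1:1:1, which is exactly what the driver reports. A simplified sketch of that enumeration; nova's real search additionally applies the flavor/image limits and preference ordering logged above:

    def possible_topologies(vcpus: int, limit: int = 65536):
        # Brute-force enumeration of triples whose product equals the vCPU
        # count, bounded by the per-dimension limit from the log.
        bound = min(vcpus, limit)
        for sockets in range(1, bound + 1):
            for cores in range(1, bound + 1):
                for threads in range(1, bound + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
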
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.246 228687 DEBUG nova.privsep.utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.246 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:08.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:08 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:08 compute-1 ceph-mon[79643]: pgmap v657: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 09:53:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:53:08 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2638503104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.577 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.596 228687 DEBUG nova.storage.rbd_utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ebc04c36-e114-4585-838e-99c2fcc19170_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.599 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:53:08 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2800833503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.943 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
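
"ceph mon dump" runs twice here because nova resolves the monitor list separately for the root disk and for the config image it checked at 09:53:08.596; the result feeds the RBD source hosts in the guest XML below. A parsing sketch (field names can vary slightly across Ceph releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    mons = json.loads(out)["mons"]
    print([(m["name"], m.get("addr")) for m in mons])
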
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.944 228687 DEBUG nova.virt.libvirt.vif [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:53:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-482640423',display_name='tempest-TestNetworkBasicOps-server-482640423',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-482640423',id=2,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ6Q4onBGYQmGEas7vROzTpaRR4Ngqnsoo6G6ugz9DEAPp+zmKsB8nXzUQcjDvfX2tQkqS/Ze0iyXFGNbAqUHjXI+hHfV06A35C79eX2YfRo8tSh9KoCjaYsESaPAenY8w==',key_name='tempest-TestNetworkBasicOps-1539694782',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-b7y8iz7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:53:04Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ebc04c36-e114-4585-838e-99c2fcc19170,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.944 228687 DEBUG nova.network.os_vif_util [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.945 228687 DEBUG nova.network.os_vif_util [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:a3:f3,bridge_name='br-int',has_traffic_filtering=True,id=a6bda4e1-b79a-4869-81eb-be1e41b174a8,network=Network(30eb507c-05e4-4b59-b0a4-24f2671c1d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6bda4e1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
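The two os_vif_util records above show nova translating its legacy VIF dict into a typed VIFOpenVSwitch object before handing it to os-vif. A minimal sketch of that translation step, using a plain dataclass in place of os_vif's real object model (field names mirror the "Converted object" line; this is illustrative, not nova's actual code):

    # Sketch of the nova_to_osvif_vif step logged above. A plain
    # dataclass stands in for os_vif's VIFOpenVSwitch; field names
    # mirror the logged repr.
    from dataclasses import dataclass

    @dataclass
    class SketchVIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_vif_to_sketch(vif: dict) -> SketchVIFOpenVSwitch:
        details = vif.get("details", {})
        return SketchVIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", "br-int"),
            vif_name=vif["devname"],  # "tapa6bda4e1-b7" above
            has_traffic_filtering=details.get("port_filter", False),
            active=vif.get("active", False),
        )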
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.946 228687 DEBUG nova.objects.instance [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid ebc04c36-e114-4585-838e-99c2fcc19170 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.960 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <uuid>ebc04c36-e114-4585-838e-99c2fcc19170</uuid>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <name>instance-00000002</name>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <memory>131072</memory>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <vcpu>1</vcpu>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <metadata>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <nova:name>tempest-TestNetworkBasicOps-server-482640423</nova:name>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <nova:creationTime>2025-11-25 09:53:08</nova:creationTime>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <nova:flavor name="m1.nano">
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <nova:memory>128</nova:memory>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <nova:disk>1</nova:disk>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <nova:swap>0</nova:swap>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       </nova:flavor>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <nova:owner>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       </nova:owner>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <nova:ports>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <nova:port uuid="a6bda4e1-b79a-4869-81eb-be1e41b174a8">
Nov 25 09:53:08 compute-1 nova_compute[228683]:           <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         </nova:port>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       </nova:ports>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </nova:instance>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   </metadata>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <sysinfo type="smbios">
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <system>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <entry name="serial">ebc04c36-e114-4585-838e-99c2fcc19170</entry>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <entry name="uuid">ebc04c36-e114-4585-838e-99c2fcc19170</entry>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </system>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   </sysinfo>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <os>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <boot dev="hd"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <smbios mode="sysinfo"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   </os>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <features>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <acpi/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <apic/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <vmcoreinfo/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   </features>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <clock offset="utc">
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <timer name="hpet" present="no"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   </clock>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <cpu mode="host-model" match="exact">
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   </cpu>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   <devices>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <disk type="network" device="disk">
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <driver type="raw" cache="none"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <source protocol="rbd" name="vms/ebc04c36-e114-4585-838e-99c2fcc19170_disk">
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       </source>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <auth username="openstack">
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       </auth>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <target dev="vda" bus="virtio"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <disk type="network" device="cdrom">
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <driver type="raw" cache="none"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <source protocol="rbd" name="vms/ebc04c36-e114-4585-838e-99c2fcc19170_disk.config">
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       </source>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <auth username="openstack">
Nov 25 09:53:08 compute-1 nova_compute[228683]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       </auth>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <target dev="sda" bus="sata"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <interface type="ethernet">
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <mac address="fa:16:3e:8d:a3:f3"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <model type="virtio"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <mtu size="1442"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <target dev="tapa6bda4e1-b7"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </interface>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <serial type="pty">
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <log file="/var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170/console.log" append="off"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </serial>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <video>
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <model type="virtio"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </video>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <input type="tablet" bus="usb"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <rng model="virtio">
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </rng>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <controller type="usb" index="0"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     <memballoon model="virtio">
Nov 25 09:53:08 compute-1 nova_compute[228683]:       <stats period="10"/>
Nov 25 09:53:08 compute-1 nova_compute[228683]:     </memballoon>
Nov 25 09:53:08 compute-1 nova_compute[228683]:   </devices>
Nov 25 09:53:08 compute-1 nova_compute[228683]: </domain>
Nov 25 09:53:08 compute-1 nova_compute[228683]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
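The dump above is the complete libvirt domain XML nova generated for instance-00000002: q35 machine type, host-model CPU, two RBD-backed disks (vda plus the sda config-drive CD-ROM), and one virtio interface on tapa6bda4e1-b7. A short standard-library sketch for pulling those details back out of such a dump:

    # Sketch: summarize a libvirt domain XML like the one above,
    # standard library only.
    import xml.etree.ElementTree as ET

    def summarize_domain(xml_text: str) -> dict:
        root = ET.fromstring(xml_text)
        return {
            "name": root.findtext("name"),
            "memory_kib": int(root.findtext("memory")),
            "disks": [(d.get("device"), d.find("target").get("dev"))
                      for d in root.iter("disk")],
            "macs": [i.find("mac").get("address")
                     for i in root.iter("interface")],
        }

    # For the domain above: name='instance-00000002',
    # memory_kib=131072, disks=[('disk', 'vda'), ('cdrom', 'sda')],
    # macs=['fa:16:3e:8d:a3:f3'].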
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.960 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Preparing to wait for external event network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.961 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.961 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.961 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
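The acquiring/acquired/released triple above is oslo.concurrency's named-lock pattern guarding the per-instance event list. A minimal sketch of the same pattern (lockutils.lock is a real context manager; the lock name mirrors the "<uuid>-events" key in the log):

    from oslo_concurrency import lockutils

    instance_uuid = "ebc04c36-e114-4585-838e-99c2fcc19170"
    with lockutils.lock(f"{instance_uuid}-events"):
        # critical section: create or fetch the pending
        # network-vif-plugged event for this instance
        pass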
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.961 228687 DEBUG nova.virt.libvirt.vif [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:53:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-482640423',display_name='tempest-TestNetworkBasicOps-server-482640423',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-482640423',id=2,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ6Q4onBGYQmGEas7vROzTpaRR4Ngqnsoo6G6ugz9DEAPp+zmKsB8nXzUQcjDvfX2tQkqS/Ze0iyXFGNbAqUHjXI+hHfV06A35C79eX2YfRo8tSh9KoCjaYsESaPAenY8w==',key_name='tempest-TestNetworkBasicOps-1539694782',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-b7y8iz7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:53:04Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ebc04c36-e114-4585-838e-99c2fcc19170,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.962 228687 DEBUG nova.network.os_vif_util [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.962 228687 DEBUG nova.network.os_vif_util [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:a3:f3,bridge_name='br-int',has_traffic_filtering=True,id=a6bda4e1-b79a-4869-81eb-be1e41b174a8,network=Network(30eb507c-05e4-4b59-b0a4-24f2671c1d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6bda4e1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.962 228687 DEBUG os_vif [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:a3:f3,bridge_name='br-int',has_traffic_filtering=True,id=a6bda4e1-b79a-4869-81eb-be1e41b174a8,network=Network(30eb507c-05e4-4b59-b0a4-24f2671c1d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6bda4e1-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.989 228687 DEBUG ovsdbapp.backend.ovs_idl [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.989 228687 DEBUG ovsdbapp.backend.ovs_idl [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.989 228687 DEBUG ovsdbapp.backend.ovs_idl [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.990 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.990 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.990 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.991 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.994 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:08 compute-1 nova_compute[228683]: 2025-11-25 09:53:08.995 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.003 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.003 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.004 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.004 228687 INFO oslo.privsep.daemon [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp473y3a5j/privsep.sock']
Nov 25 09:53:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:09.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:09 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.210 228687 DEBUG nova.network.neutron [req-50ce872e-3797-4ed2-9e2b-db5bd8cdf585 req-e9f0dfc6-cfc9-47dd-88f8-0f280f8a6b65 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Updated VIF entry in instance network info cache for port a6bda4e1-b79a-4869-81eb-be1e41b174a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.211 228687 DEBUG nova.network.neutron [req-50ce872e-3797-4ed2-9e2b-db5bd8cdf585 req-e9f0dfc6-cfc9-47dd-88f8-0f280f8a6b65 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Updating instance_info_cache with network_info: [{"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.224 228687 DEBUG oslo_concurrency.lockutils [req-50ce872e-3797-4ed2-9e2b-db5bd8cdf585 req-e9f0dfc6-cfc9-47dd-88f8-0f280f8a6b65 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.517 228687 INFO oslo.privsep.daemon [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Spawned new privsep daemon via rootwrap
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.445 231570 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.449 231570 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.450 231570 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.450 231570 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231570
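The lines above show the unprivileged worker launching a privsep helper through rootwrap; the helper runs as uid/gid 0/0 but keeps only CAP_DAC_OVERRIDE and CAP_NET_ADMIN, matching the logged context name vif_plug_ovs.privsep.vif_plug. A hedged sketch of declaring an oslo.privsep context with that capability set (the prefix, cfg_section, and entrypoint function are illustrative, not vif_plug_ovs's actual source):

    from oslo_privsep import capabilities, priv_context

    # Context with the capabilities seen in the log; names are
    # illustrative placeholders.
    vif_plug = priv_context.PrivContext(
        "vif_plug_ovs_sketch",
        cfg_section="vif_plug_ovs_privileged",
        pypath=__name__ + ".vif_plug",
        capabilities=[capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_NET_ADMIN],
    )

    @vif_plug.entrypoint
    def set_link_attrs(device: str) -> None:
        # body executes inside the privsep daemon as root
        # (uid/gid 0/0, as the log reports)
        ...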
Nov 25 09:53:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2638503104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:53:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2800833503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:53:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:09 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14005ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.712 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.760 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.760 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6bda4e1-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.761 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6bda4e1-b7, col_values=(('external_ids', {'iface-id': 'a6bda4e1-b79a-4869-81eb-be1e41b174a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:a3:f3', 'vm-uuid': 'ebc04c36-e114-4585-838e-99c2fcc19170'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
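The two-command transaction above adds the tap device to br-int and stamps its Interface row with the external_ids that OVN keys on (iface-id in particular). os-vif issues it directly over the ovsdb connection at tcp:127.0.0.1:6640; the ovs-vsctl equivalent, wrapped in Python purely for illustration:

    import subprocess

    # CLI equivalent of the AddPortCommand + DbSetCommand txn above.
    port = "tapa6bda4e1-b7"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         "external_ids:iface-id=a6bda4e1-b79a-4869-81eb-be1e41b174a8",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:8d:a3:f3",
         "external_ids:vm-uuid=ebc04c36-e114-4585-838e-99c2fcc19170"],
        check=True,
    )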
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.762 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:09 compute-1 NetworkManager[48856]: <info>  [1764064389.7628] manager: (tapa6bda4e1-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.765 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.766 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.767 228687 INFO os_vif [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:a3:f3,bridge_name='br-int',has_traffic_filtering=True,id=a6bda4e1-b79a-4869-81eb-be1e41b174a8,network=Network(30eb507c-05e4-4b59-b0a4-24f2671c1d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6bda4e1-b7')
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.801 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.802 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.802 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:8d:a3:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.803 228687 INFO nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Using config drive
Nov 25 09:53:09 compute-1 nova_compute[228683]: 2025-11-25 09:53:09.819 228687 DEBUG nova.storage.rbd_utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ebc04c36-e114-4585-838e-99c2fcc19170_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.219 228687 INFO nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Creating config drive at /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170/disk.config
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.223 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6tvh32hi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.345 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6tvh32hi" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.364 228687 DEBUG nova.storage.rbd_utils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ebc04c36-e114-4585-838e-99c2fcc19170_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.366 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170/disk.config ebc04c36-e114-4585-838e-99c2fcc19170_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.445 228687 DEBUG oslo_concurrency.processutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170/disk.config ebc04c36-e114-4585-838e-99c2fcc19170_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.446 228687 INFO nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Deleting local config drive /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170/disk.config because it was imported into RBD.
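The config-drive sequence above builds an ISO9660 image locally with mkisofs, imports it into the Ceph vms pool as <uuid>_disk.config, then removes the local copy. Condensed into a sketch (paths, flags, and image names copied from the log; /tmp/tmp6tvh32hi was nova's temporary metadata directory):

    import os
    import subprocess

    inst = "ebc04c36-e114-4585-838e-99c2fcc19170"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Pack the metadata directory into a config-2 labelled ISO.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
         "/tmp/tmp6tvh32hi"],
        check=True)
    # 2. Import it into the vms pool so the guest reads it over RBD.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         f"{inst}_disk.config", "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    # 3. Drop the local copy, as the log notes.
    os.remove(iso)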
Nov 25 09:53:10 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 25 09:53:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:10.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:10 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:10 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 25 09:53:10 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 25 09:53:10 compute-1 kernel: tapa6bda4e1-b7: entered promiscuous mode
Nov 25 09:53:10 compute-1 NetworkManager[48856]: <info>  [1764064390.5172] manager: (tapa6bda4e1-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 25 09:53:10 compute-1 ovn_controller[133620]: 2025-11-25T09:53:10Z|00027|binding|INFO|Claiming lport a6bda4e1-b79a-4869-81eb-be1e41b174a8 for this chassis.
Nov 25 09:53:10 compute-1 ovn_controller[133620]: 2025-11-25T09:53:10Z|00028|binding|INFO|a6bda4e1-b79a-4869-81eb-be1e41b174a8: Claiming fa:16:3e:8d:a3:f3 10.100.0.30
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.518 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:10 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:10.527 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:a3:f3 10.100.0.30'], port_security=['fa:16:3e:8d:a3:f3 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'ebc04c36-e114-4585-838e-99c2fcc19170', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30eb507c-05e4-4b59-b0a4-24f2671c1d03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'acc2427b-6565-4546-b85c-2f3acc2af26b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e4678d8-9fea-4fa2-ab48-aa18df9b47d7, chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], logical_port=a6bda4e1-b79a-4869-81eb-be1e41b174a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:53:10 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:10.528 142940 INFO neutron.agent.ovn.metadata.agent [-] Port a6bda4e1-b79a-4869-81eb-be1e41b174a8 in datapath 30eb507c-05e4-4b59-b0a4-24f2671c1d03 bound to our chassis
Nov 25 09:53:10 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:10.530 142940 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30eb507c-05e4-4b59-b0a4-24f2671c1d03
Nov 25 09:53:10 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:10.530 142940 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpl4btd0_s/privsep.sock']
Nov 25 09:53:10 compute-1 systemd-udevd[231672]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:53:10 compute-1 NetworkManager[48856]: <info>  [1764064390.5646] device (tapa6bda4e1-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:53:10 compute-1 NetworkManager[48856]: <info>  [1764064390.5653] device (tapa6bda4e1-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:53:10 compute-1 systemd-machined[192680]: New machine qemu-1-instance-00000002.
Nov 25 09:53:10 compute-1 ceph-mon[79643]: pgmap v658: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.569 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:10 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.575 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:10 compute-1 ovn_controller[133620]: 2025-11-25T09:53:10Z|00029|binding|INFO|Setting lport a6bda4e1-b79a-4869-81eb-be1e41b174a8 ovn-installed in OVS
Nov 25 09:53:10 compute-1 ovn_controller[133620]: 2025-11-25T09:53:10Z|00030|binding|INFO|Setting lport a6bda4e1-b79a-4869-81eb-be1e41b174a8 up in Southbound
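ovn-controller claims the logical port because the Interface row written earlier carries external_ids:iface-id equal to the Southbound logical_port name; it then marks the port ovn-installed in OVS and up in the Southbound DB. A sketch of verifying that binding from the chassis (standard ovn-sbctl/ovs-vsctl inspection, wrapped in Python to keep the examples in one language):

    import subprocess

    lport = "a6bda4e1-b79a-4869-81eb-be1e41b174a8"
    # Southbound view: which chassis claimed the port, and is it up?
    subprocess.run(["ovn-sbctl", "--columns=chassis,up", "find",
                    "Port_Binding", f"logical_port={lport}"],
                   check=True)
    # Local OVS view: the iface-id the claim was keyed on.
    subprocess.run(["ovs-vsctl", "get", "Interface",
                    "tapa6bda4e1-b7", "external_ids:iface-id"],
                   check=True)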
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.578 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.729 228687 DEBUG nova.compute.manager [req-9083fc6f-80a4-4e4c-b987-39699dafbba1 req-4c182de8-6170-4fb7-bf87-06307e051e4e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received event network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.730 228687 DEBUG oslo_concurrency.lockutils [req-9083fc6f-80a4-4e4c-b987-39699dafbba1 req-4c182de8-6170-4fb7-bf87-06307e051e4e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.730 228687 DEBUG oslo_concurrency.lockutils [req-9083fc6f-80a4-4e4c-b987-39699dafbba1 req-4c182de8-6170-4fb7-bf87-06307e051e4e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.731 228687 DEBUG oslo_concurrency.lockutils [req-9083fc6f-80a4-4e4c-b987-39699dafbba1 req-4c182de8-6170-4fb7-bf87-06307e051e4e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:10 compute-1 nova_compute[228683]: 2025-11-25 09:53:10.731 228687 DEBUG nova.compute.manager [req-9083fc6f-80a4-4e4c-b987-39699dafbba1 req-4c182de8-6170-4fb7-bf87-06307e051e4e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Processing event network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:53:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:11.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:11.057 142940 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:11.058 142940 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpl4btd0_s/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:10.985 231684 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:10.988 231684 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:10.990 231684 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:10.990 231684 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231684
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:11.059 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[870adc49-7e3d-4c50-9089-e3bd0cc9e822]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:11 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.430 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064391.4295607, ebc04c36-e114-4585-838e-99c2fcc19170 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.430 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] VM Started (Lifecycle Event)
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.432 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.440 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.442 228687 INFO nova.virt.libvirt.driver [-] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Instance spawned successfully.
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.442 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.444 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.446 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
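The sync line compares the power state recorded in the DB (0) against what libvirt now reports (1). Nova's power-state constants make those numbers readable; a sketch with values as defined in nova.compute.power_state:

    # DB power_state 0 vs VM power_state 1, per the log line above.
    POWER_STATES = {
        0x00: "NOSTATE", 0x01: "RUNNING", 0x03: "PAUSED",
        0x04: "SHUTDOWN", 0x06: "CRASHED", 0x07: "SUSPENDED",
    }
    print(POWER_STATES[0], "->", POWER_STATES[1])
    # NOSTATE -> RUNNING: the DB simply lags while task_state is
    # still 'spawning', so the handler skips the sync (see below).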
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.456 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.456 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.456 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.457 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.457 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.457 228687 DEBUG nova.virt.libvirt.driver [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
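The six "Found default" lines record the buses and models nova settled on for image properties the image left unset, so later rebuilds and attaches stay consistent. Collected as a dict (values copied from the log):

    registered_image_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }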
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.460 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.460 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064391.4296348, ebc04c36-e114-4585-838e-99c2fcc19170 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.461 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] VM Paused (Lifecycle Event)
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.497 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.499 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064391.4395897, ebc04c36-e114-4585-838e-99c2fcc19170 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.500 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] VM Resumed (Lifecycle Event)
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.502 228687 INFO nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Took 7.44 seconds to spawn the instance on the hypervisor.
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.502 228687 DEBUG nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.518 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.520 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.536 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.549 228687 INFO nova.compute.manager [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Took 8.10 seconds to build instance.
Nov 25 09:53:11 compute-1 nova_compute[228683]: 2025-11-25 09:53:11.557 228687 DEBUG oslo_concurrency.lockutils [None req-92305fee-cfba-417f-9281-b5b8bea7fe55 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
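
The lock lines above ("acquired" / "released" with held 8.159s) come from oslo.concurrency's lockutils, which Nova uses to serialize build_and_run_instance per instance UUID. A minimal sketch of the same pattern, assuming nothing beyond what the log shows (the decorated function name is taken from the log; everything else is illustrative):

    # Sketch only: per-instance serialization via oslo.concurrency.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('ebc04c36-e114-4585-838e-99c2fcc19170')
    def _locked_do_build_and_run_instance():
        # Acquire/held/release timings are logged at DEBUG by the
        # "inner" wrapper in oslo_concurrency/lockutils.py, as above.
        pass
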
Nov 25 09:53:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:11 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:11.582 231684 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:11.583 231684 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:11.584 231684 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.141 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9c032c-11d6-4515-b073-6fcfa3a9badc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.142 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap30eb507c-01 in ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.144 231684 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap30eb507c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.144 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[b1330abd-3dae-4a61-a16c-bfbaabcd2f3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.147 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[3e24ee26-7ffd-4739-bce0-c067999cd0fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.167 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[794b71d8-63bb-47c4-a5cd-d2962bfec1b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.190 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[92b4b423-6d66-4c22-a1a5-340493cbbd00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.191 142940 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpwtvjh35p/privsep.sock']
Nov 25 09:53:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:12 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14006ca0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:12.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:12 compute-1 ceph-mon[79643]: pgmap v659: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.778 142940 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.779 142940 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwtvjh35p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.699 231741 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.703 231741 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.704 231741 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.705 231741 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231741
Nov 25 09:53:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:12.781 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c3cd59-dcab-4059-a278-ad2d0de5274b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
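
The sequence above is a privsep daemon being spawned on demand: the agent runs privsep-helper through sudo/rootwrap (09:53:12.191), accepts the connection on the temporary socket, and the new daemon reports uid/gid 0/0 with CAP_NET_ADMIN|CAP_SYS_ADMIN. A hedged sketch of how such a context is declared with oslo.privsep; the real neutron.privileged.link_cmd context differs in detail:

    # Illustrative oslo.privsep context; not Neutron's actual definition.
    from oslo_privsep import capabilities, priv_context

    link_cmd = priv_context.PrivContext(
        'neutron',
        cfg_section='privsep',
        pypath=__name__ + '.link_cmd',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN],
    )

    @link_cmd.entrypoint
    def example_privileged_call():
        # Runs inside the privileged daemon process (pid 231741 above);
        # results travel back over the privsep socket as reply[...] lines.
        pass
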
Nov 25 09:53:12 compute-1 nova_compute[228683]: 2025-11-25 09:53:12.792 228687 DEBUG nova.compute.manager [req-a72f4144-4a13-4732-b269-369ee0592f1b req-6b799843-6822-40fa-9bf0-26331825605e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received event network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:12 compute-1 nova_compute[228683]: 2025-11-25 09:53:12.792 228687 DEBUG oslo_concurrency.lockutils [req-a72f4144-4a13-4732-b269-369ee0592f1b req-6b799843-6822-40fa-9bf0-26331825605e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:12 compute-1 nova_compute[228683]: 2025-11-25 09:53:12.793 228687 DEBUG oslo_concurrency.lockutils [req-a72f4144-4a13-4732-b269-369ee0592f1b req-6b799843-6822-40fa-9bf0-26331825605e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:12 compute-1 nova_compute[228683]: 2025-11-25 09:53:12.793 228687 DEBUG oslo_concurrency.lockutils [req-a72f4144-4a13-4732-b269-369ee0592f1b req-6b799843-6822-40fa-9bf0-26331825605e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:12 compute-1 nova_compute[228683]: 2025-11-25 09:53:12.793 228687 DEBUG nova.compute.manager [req-a72f4144-4a13-4732-b269-369ee0592f1b req-6b799843-6822-40fa-9bf0-26331825605e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] No waiting events found dispatching network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:53:12 compute-1 nova_compute[228683]: 2025-11-25 09:53:12.794 228687 WARNING nova.compute.manager [req-a72f4144-4a13-4732-b269-369ee0592f1b req-6b799843-6822-40fa-9bf0-26331825605e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received unexpected event network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 for instance with vm_state active and task_state None.
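
The WARNING is benign in this trace: the network-vif-plugged event for port a6bda4e1 arrived after the instance had already reached vm_state active, so no waiter was registered and the event is dropped. During spawn, Nova normally arms a waiter before plugging VIFs; a sketch of that pattern using nova's virtapi (arguments and surrounding code are illustrative, not the actual call site):

    # Sketch of the waiter pattern; the real call site is in nova's
    # libvirt driver, and deadline/error handling differ.
    def plug_and_wait(virtapi, instance):
        events = [('network-vif-plugged',
                   'a6bda4e1-b79a-4869-81eb-be1e41b174a8')]
        with virtapi.wait_for_instance_event(instance, events, deadline=300):
            pass  # plug VIFs / start the guest; exiting the block waits
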
Nov 25 09:53:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:13.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.181 231741 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.181 231741 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.181 231741 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:13 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14006ca0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:13 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.652 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1d090b-85f2-4055-926d-3aa86cd63870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.668 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[e60b3322-25c6-4b65-802c-c0c69ec9e54d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 NetworkManager[48856]: <info>  [1764064393.6696] manager: (tap30eb507c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 25 09:53:13 compute-1 systemd-udevd[231754]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.697 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[e96d4ff3-0f0d-46fc-9c83-1900d1669e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.701 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[fa368acd-fe86-4418-acb8-b3481f91d21e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 NetworkManager[48856]: <info>  [1764064393.7169] device (tap30eb507c-00): carrier: link connected
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.721 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa7ddba-7d5d-49d6-b091-a28d206aced5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.735 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[ed58995d-7a04-49df-a12d-d16ef081c6bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30eb507c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:a8:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 317826, 'reachable_time': 43999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231764, 'error': None, 'target': 'ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.745 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[76ece68f-864c-43d9-8baa-60fd34505730]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:a84d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 317826, 'tstamp': 317826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231765, 'error': None, 'target': 'ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.753 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b25841-2471-4b13-bf07-04f56d3ea6d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30eb507c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:a8:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 317826, 'reachable_time': 43999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231766, 'error': None, 'target': 'ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.772 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[687b8173-2153-406b-82a6-628fce2af506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.808 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[180d5792-3f64-4698-b4d1-2970671c5008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.810 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30eb507c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.810 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.810 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30eb507c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:13 compute-1 nova_compute[228683]: 2025-11-25 09:53:13.812 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:13 compute-1 kernel: tap30eb507c-00: entered promiscuous mode
Nov 25 09:53:13 compute-1 NetworkManager[48856]: <info>  [1764064393.8139] manager: (tap30eb507c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 25 09:53:13 compute-1 nova_compute[228683]: 2025-11-25 09:53:13.814 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.816 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30eb507c-00, col_values=(('external_ids', {'iface-id': '1b02b627-1baa-4d81-8889-52aa0d134bf8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
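
Taken together, the three ovsdbapp transactions above remove the stale tap30eb507c-00 port from br-ex (a no-op here, "Transaction caused no change"), add it to br-int, and set external_ids:iface-id so ovn-controller can bind the logical port. A minimal ovsdbapp sketch of the same sequence, assuming a local ovsdb unix socket (the agent itself gets its connection from config):

    # Sketch using ovsdbapp's Open_vSwitch schema API; socket path assumed.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap30eb507c-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap30eb507c-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap30eb507c-00',
            ('external_ids',
             {'iface-id': '1b02b627-1baa-4d81-8889-52aa0d134bf8'})))
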
Nov 25 09:53:13 compute-1 nova_compute[228683]: 2025-11-25 09:53:13.817 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:13 compute-1 nova_compute[228683]: 2025-11-25 09:53:13.817 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:13 compute-1 ovn_controller[133620]: 2025-11-25T09:53:13Z|00031|binding|INFO|Releasing lport 1b02b627-1baa-4d81-8889-52aa0d134bf8 from this chassis (sb_readonly=0)
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.820 142940 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/30eb507c-05e4-4b59-b0a4-24f2671c1d03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/30eb507c-05e4-4b59-b0a4-24f2671c1d03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.821 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[9217c140-a192-4862-90d6-4e677edc3a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.822 142940 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: global
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     log         /dev/log local0 debug
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     log-tag     haproxy-metadata-proxy-30eb507c-05e4-4b59-b0a4-24f2671c1d03
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     user        root
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     group       root
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     maxconn     1024
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     pidfile     /var/lib/neutron/external/pids/30eb507c-05e4-4b59-b0a4-24f2671c1d03.pid.haproxy
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     daemon
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: defaults
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     log global
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     mode http
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     option httplog
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     option dontlognull
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     option http-server-close
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     option forwardfor
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     retries                 3
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     timeout http-request    30s
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     timeout connect         30s
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     timeout client          32s
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     timeout server          32s
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     timeout http-keep-alive 30s
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: listen listener
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     bind 169.254.169.254:80
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:     http-request add-header X-OVN-Network-ID 30eb507c-05e4-4b59-b0a4-24f2671c1d03
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:53:13 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:13.823 142940 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03', 'env', 'PROCESS_TAG=haproxy-30eb507c-05e4-4b59-b0a4-24f2671c1d03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/30eb507c-05e4-4b59-b0a4-24f2671c1d03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
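
The rendered haproxy_cfg above binds 169.254.169.254:80 inside the ovnmeta- namespace, proxies to the unix-socket backend at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the metadata service can resolve the tenant network. Before a launch like the rootwrap command above, such a config can be syntax-checked with haproxy's -c flag; a hypothetical standalone pre-flight check, with paths copied from the log:

    # Hypothetical check; the agent itself just execs haproxy as logged.
    import subprocess

    netns = 'ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            '30eb507c-05e4-4b59-b0a4-24f2671c1d03.conf')
    subprocess.run(
        ['ip', 'netns', 'exec', netns, 'haproxy', '-c', '-f', conf],
        check=True)  # -c validates the configuration and exits
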
Nov 25 09:53:13 compute-1 nova_compute[228683]: 2025-11-25 09:53:13.831 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:14 compute-1 podman[231795]: 2025-11-25 09:53:14.129018514 +0000 UTC m=+0.039293677 container create 632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:53:14 compute-1 systemd[1]: Started libpod-conmon-632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5.scope.
Nov 25 09:53:14 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:53:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/519750af1ea38874c665e49b65a08b7409f31c606a7fd6ad23d1b1876618d5ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:14 compute-1 podman[231795]: 2025-11-25 09:53:14.191972555 +0000 UTC m=+0.102247719 container init 632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:53:14 compute-1 podman[231795]: 2025-11-25 09:53:14.199171292 +0000 UTC m=+0.109446456 container start 632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:53:14 compute-1 podman[231795]: 2025-11-25 09:53:14.111752924 +0000 UTC m=+0.022028107 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:53:14 compute-1 neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03[231806]: [NOTICE]   (231811) : New worker (231813) forked
Nov 25 09:53:14 compute-1 neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03[231806]: [NOTICE]   (231811) : Loading success.
Nov 25 09:53:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:14 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:14.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:14 compute-1 ceph-mon[79643]: pgmap v660: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:14 compute-1 nova_compute[228683]: 2025-11-25 09:53:14.713 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:14 compute-1 nova_compute[228683]: 2025-11-25 09:53:14.762 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:14 compute-1 nova_compute[228683]: 2025-11-25 09:53:14.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:14 compute-1 nova_compute[228683]: 2025-11-25 09:53:14.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:53:14 compute-1 nova_compute[228683]: 2025-11-25 09:53:14.908 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:53:14 compute-1 nova_compute[228683]: 2025-11-25 09:53:14.908 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:14 compute-1 nova_compute[228683]: 2025-11-25 09:53:14.908 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:53:14 compute-1 nova_compute[228683]: 2025-11-25 09:53:14.915 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
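
The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task machinery walking the registered tasks in one pass. The registration pattern, sketched below; the class body and spacing value are illustrative, not Nova's actual definitions:

    # Sketch of oslo.service periodic task registration.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _run_pending_deletes(self, context):
            # Invoked from run_periodic_tasks(), producing the
            # oslo_service.periodic_task DEBUG lines seen above.
            pass
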
Nov 25 09:53:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:15 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:15 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14006ca0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:15 compute-1 podman[231819]: 2025-11-25 09:53:15.782768737 +0000 UTC m=+0.035725944 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 09:53:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:16 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:16.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:16 compute-1 ceph-mon[79643]: pgmap v661: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:16 compute-1 nova_compute[228683]: 2025-11-25 09:53:16.965 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:16.966 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:53:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:16.969 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:53:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:16.970 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:17.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:17 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.918 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.918 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.919 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.919 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.934 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.934 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.934 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.934 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:53:17 compute-1 nova_compute[228683]: 2025-11-25 09:53:17.934 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:53:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/850203280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.290 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
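
The resource audit shells out to ceph df through oslo.concurrency's processutils, which logs both the command line and its exit status and runtime (rc 0 in 0.356s here). The equivalent direct call, as a sketch:

    # Same command as logged; execute() returns (stdout, stderr) and
    # raises ProcessExecutionError on a non-zero exit code.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
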
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.355 228687 DEBUG nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.356 228687 DEBUG nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:53:18 compute-1 NetworkManager[48856]: <info>  [1764064398.4656] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Nov 25 09:53:18 compute-1 NetworkManager[48856]: <info>  [1764064398.4661] device (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:53:18 compute-1 NetworkManager[48856]: <info>  [1764064398.4671] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 25 09:53:18 compute-1 NetworkManager[48856]: <info>  [1764064398.4673] device (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:53:18 compute-1 NetworkManager[48856]: <info>  [1764064398.4681] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.466 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:18 compute-1 NetworkManager[48856]: <info>  [1764064398.4685] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 25 09:53:18 compute-1 NetworkManager[48856]: <info>  [1764064398.4689] device (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:53:18 compute-1 NetworkManager[48856]: <info>  [1764064398.4692] device (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:53:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:18 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b14006ca0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:18.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.537 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:18 compute-1 ovn_controller[133620]: 2025-11-25T09:53:18Z|00032|binding|INFO|Releasing lport 1b02b627-1baa-4d81-8889-52aa0d134bf8 from this chassis (sb_readonly=0)
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.543 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:18 compute-1 ceph-mon[79643]: pgmap v662: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 09:53:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1230150893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/850203280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.672 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.673 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4883MB free_disk=59.92179870605469GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.673 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.674 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.757 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Instance ebc04c36-e114-4585-838e-99c2fcc19170 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.757 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.758 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.818 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing inventories for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.855 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating ProviderTree inventory for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.855 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating inventory in ProviderTree for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.867 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing aggregate associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.885 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing trait associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX512VAES,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:53:18 compute-1 nova_compute[228683]: 2025-11-25 09:53:18.913 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:53:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:53:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:19 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:53:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3047445895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.263 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.267 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating inventory in ProviderTree for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.300 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updated inventory for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.300 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.301 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating inventory in ProviderTree for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.316 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.317 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:19 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/259020438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3047445895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.714 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:19 compute-1 nova_compute[228683]: 2025-11-25 09:53:19.763 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:20 compute-1 nova_compute[228683]: 2025-11-25 09:53:20.291 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:20 compute-1 nova_compute[228683]: 2025-11-25 09:53:20.292 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:53:20 compute-1 nova_compute[228683]: 2025-11-25 09:53:20.292 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:53:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:20 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b18002600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:20.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:20 compute-1 ceph-mon[79643]: pgmap v663: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.057 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.057 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquired lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.057 228687 DEBUG nova.network.neutron [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.057 228687 DEBUG nova.objects.instance [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ebc04c36-e114-4585-838e-99c2fcc19170 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:53:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:21.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:21 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:21 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 09:53:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:21 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1336917698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:21 compute-1 sudo[231886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:53:21 compute-1 sudo[231886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:21 compute-1 sudo[231886]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.905 228687 DEBUG nova.network.neutron [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Updating instance_info_cache with network_info: [{"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.918 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Releasing lock "refresh_cache-ebc04c36-e114-4585-838e-99c2fcc19170" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.919 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.919 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.919 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.919 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.920 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:21 compute-1 nova_compute[228683]: 2025-11-25 09:53:21.920 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:53:22 compute-1 ovn_controller[133620]: 2025-11-25T09:53:22Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:a3:f3 10.100.0.30
Nov 25 09:53:22 compute-1 ovn_controller[133620]: 2025-11-25T09:53:22Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:a3:f3 10.100.0.30
Nov 25 09:53:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:22 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:22.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:22 compute-1 ceph-mon[79643]: pgmap v664: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 25 09:53:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1424670342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:23.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:23 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:23 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:24 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:24.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:24 compute-1 ceph-mon[79643]: pgmap v665: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Nov 25 09:53:24 compute-1 nova_compute[228683]: 2025-11-25 09:53:24.716 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:24 compute-1 nova_compute[228683]: 2025-11-25 09:53:24.764 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:24 compute-1 podman[231913]: 2025-11-25 09:53:24.818057434 +0000 UTC m=+0.071097119 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 09:53:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:25.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:25 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:25 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:26 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:26.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:26 compute-1 ceph-mon[79643]: pgmap v666: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Nov 25 09:53:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:27.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:27 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b18003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:27 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:28 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:28.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:28 compute-1 ceph-mon[79643]: pgmap v667: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 09:53:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:29.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:29 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:29 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:29 compute-1 nova_compute[228683]: 2025-11-25 09:53:29.719 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:29 compute-1 nova_compute[228683]: 2025-11-25 09:53:29.766 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:30 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:30 compute-1 ceph-mon[79643]: pgmap v668: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:53:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:30 compute-1 podman[231940]: 2025-11-25 09:53:30.787837864 +0000 UTC m=+0.041309848 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:53:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:31 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:31 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b18003af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.329 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "ebc04c36-e114-4585-838e-99c2fcc19170" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.329 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.329 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.329 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.329 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.330 228687 INFO nova.compute.manager [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Terminating instance
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.331 228687 DEBUG nova.compute.manager [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:53:32 compute-1 kernel: tapa6bda4e1-b7 (unregistering): left promiscuous mode
Nov 25 09:53:32 compute-1 NetworkManager[48856]: <info>  [1764064412.3667] device (tapa6bda4e1-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:53:32 compute-1 ovn_controller[133620]: 2025-11-25T09:53:32Z|00033|binding|INFO|Releasing lport a6bda4e1-b79a-4869-81eb-be1e41b174a8 from this chassis (sb_readonly=0)
Nov 25 09:53:32 compute-1 ovn_controller[133620]: 2025-11-25T09:53:32Z|00034|binding|INFO|Setting lport a6bda4e1-b79a-4869-81eb-be1e41b174a8 down in Southbound
Nov 25 09:53:32 compute-1 ovn_controller[133620]: 2025-11-25T09:53:32Z|00035|binding|INFO|Removing iface tapa6bda4e1-b7 ovn-installed in OVS
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.370 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.380 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:a3:f3 10.100.0.30'], port_security=['fa:16:3e:8d:a3:f3 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'ebc04c36-e114-4585-838e-99c2fcc19170', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30eb507c-05e4-4b59-b0a4-24f2671c1d03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'acc2427b-6565-4546-b85c-2f3acc2af26b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e4678d8-9fea-4fa2-ab48-aa18df9b47d7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], logical_port=a6bda4e1-b79a-4869-81eb-be1e41b174a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.381 142940 INFO neutron.agent.ovn.metadata.agent [-] Port a6bda4e1-b79a-4869-81eb-be1e41b174a8 in datapath 30eb507c-05e4-4b59-b0a4-24f2671c1d03 unbound from our chassis
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.382 142940 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 30eb507c-05e4-4b59-b0a4-24f2671c1d03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.382 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[21997da5-cc5f-4f82-a8cd-e38b3a5397bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.383 142940 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03 namespace which is not needed anymore
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.389 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:32 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 25 09:53:32 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 11.660s CPU time.
Nov 25 09:53:32 compute-1 systemd-machined[192680]: Machine qemu-1-instance-00000002 terminated.
Nov 25 09:53:32 compute-1 neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03[231806]: [NOTICE]   (231811) : haproxy version is 2.8.14-c23fe91
Nov 25 09:53:32 compute-1 neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03[231806]: [NOTICE]   (231811) : path to executable is /usr/sbin/haproxy
Nov 25 09:53:32 compute-1 neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03[231806]: [ALERT]    (231811) : Current worker (231813) exited with code 143 (Terminated)
Nov 25 09:53:32 compute-1 neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03[231806]: [WARNING]  (231811) : All workers exited. Exiting... (0)
Nov 25 09:53:32 compute-1 systemd[1]: libpod-632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5.scope: Deactivated successfully.
Nov 25 09:53:32 compute-1 podman[231980]: 2025-11-25 09:53:32.480955011 +0000 UTC m=+0.034521924 container died 632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 09:53:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:32 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5-userdata-shm.mount: Deactivated successfully.
Nov 25 09:53:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-519750af1ea38874c665e49b65a08b7409f31c606a7fd6ad23d1b1876618d5ba-merged.mount: Deactivated successfully.
Nov 25 09:53:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:32 compute-1 podman[231980]: 2025-11-25 09:53:32.50390754 +0000 UTC m=+0.057474454 container cleanup 632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:53:32 compute-1 systemd[1]: libpod-conmon-632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5.scope: Deactivated successfully.
Nov 25 09:53:32 compute-1 systemd-udevd[231962]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:53:32 compute-1 NetworkManager[48856]: <info>  [1764064412.5428] manager: (tapa6bda4e1-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Nov 25 09:53:32 compute-1 kernel: tapa6bda4e1-b7: entered promiscuous mode
Nov 25 09:53:32 compute-1 kernel: tapa6bda4e1-b7 (unregistering): left promiscuous mode
Nov 25 09:53:32 compute-1 podman[232005]: 2025-11-25 09:53:32.545368884 +0000 UTC m=+0.026463988 container remove 632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.551 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.556 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[a84e6dfd-34b9-4890-8e0a-63c6d07217a1]: (4, ('Tue Nov 25 09:53:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03 (632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5)\n632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5\nTue Nov 25 09:53:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03 (632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5)\n632aba2555d3440ca2e461e1a449dcd6ac85e812523c953fcfc4f1a716b83fe5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.558 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c8e840-cbbb-4265-97d7-884c4ad446d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.558 228687 INFO nova.virt.libvirt.driver [-] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Instance destroyed successfully.
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.559 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30eb507c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.558 228687 DEBUG nova.objects.instance [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid ebc04c36-e114-4585-838e-99c2fcc19170 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.560 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:32 compute-1 kernel: tap30eb507c-00: left promiscuous mode
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.577 228687 DEBUG nova.virt.libvirt.vif [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:53:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-482640423',display_name='tempest-TestNetworkBasicOps-server-482640423',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-482640423',id=2,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ6Q4onBGYQmGEas7vROzTpaRR4Ngqnsoo6G6ugz9DEAPp+zmKsB8nXzUQcjDvfX2tQkqS/Ze0iyXFGNbAqUHjXI+hHfV06A35C79eX2YfRo8tSh9KoCjaYsESaPAenY8w==',key_name='tempest-TestNetworkBasicOps-1539694782',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:53:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-b7y8iz7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:53:11Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ebc04c36-e114-4585-838e-99c2fcc19170,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.577 228687 DEBUG nova.network.os_vif_util [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "address": "fa:16:3e:8d:a3:f3", "network": {"id": "30eb507c-05e4-4b59-b0a4-24f2671c1d03", "bridge": "br-int", "label": "tempest-network-smoke--707364792", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6bda4e1-b7", "ovs_interfaceid": "a6bda4e1-b79a-4869-81eb-be1e41b174a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.577 228687 DEBUG nova.network.os_vif_util [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:a3:f3,bridge_name='br-int',has_traffic_filtering=True,id=a6bda4e1-b79a-4869-81eb-be1e41b174a8,network=Network(30eb507c-05e4-4b59-b0a4-24f2671c1d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6bda4e1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.577 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[a2018ac9-cbcd-4e95-9162-f6d67a712b66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.578 228687 DEBUG os_vif [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:a3:f3,bridge_name='br-int',has_traffic_filtering=True,id=a6bda4e1-b79a-4869-81eb-be1e41b174a8,network=Network(30eb507c-05e4-4b59-b0a4-24f2671c1d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6bda4e1-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.579 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.580 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6bda4e1-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
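The DelPortCommand above is ovsdbapp removing the instance's tap port from br-int in a single OVSDB transaction. A minimal standalone sketch of the same call follows; the connection string and timeout are illustrative assumptions, not values taken from this host:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the logged DelPortCommand(port=tapa6bda4e1-b7, bridge=br-int, if_exists=True).
    api.del_port('tapa6bda4e1-b7', bridge='br-int', if_exists=True).execute(check_error=True)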
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.580 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.581 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.583 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.586 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[087715e8-dd0e-4040-af40-cd7af78b3f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.586 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.587 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f8922f-929a-49f7-89f4-e73d38d6aa5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.588 228687 INFO os_vif [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:a3:f3,bridge_name='br-int',has_traffic_filtering=True,id=a6bda4e1-b79a-4869-81eb-be1e41b174a8,network=Network(30eb507c-05e4-4b59-b0a4-24f2671c1d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6bda4e1-b7')
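The unplug that just succeeded is os-vif's two-argument public API, driven by the VIFOpenVSwitch object converted a few lines earlier. A rough sketch of that call path, with the field set abbreviated from the logged object; constructing the objects by hand like this (and whether explicit object registration is needed) is an assumption for illustration:

    import os_vif
    from os_vif import objects
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()        # loads plugins, including 'ovs'
    objects.register_all()     # assumption: register versioned objects before use

    inst = instance_info.InstanceInfo(
        uuid='ebc04c36-e114-4585-838e-99c2fcc19170',
        name='tempest-TestNetworkBasicOps-server-482640423')

    ovs_vif = vif.VIFOpenVSwitch(
        id='a6bda4e1-b79a-4869-81eb-be1e41b174a8',
        address='fa:16:3e:8d:a3:f3',
        bridge_name='br-int',
        vif_name='tapa6bda4e1-b7',
        network=network.Network(id='30eb507c-05e4-4b59-b0a4-24f2671c1d03'))

    os_vif.unplug(ovs_vif, inst)   # dispatches to the 'ovs' plugin, as logged above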
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.598 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[81671157-cee8-4f7f-a0e5-c733c2a954ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 317819, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232025, 'error': None, 'target': 'ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
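The privsep reply above is a raw pyroute2 netlink dump of the loopback device inside the ovnmeta- namespace, taken just before teardown (note the 'target' field naming the namespace). A short sketch of how such a dump is obtained; the attribute names correspond to the IFLA_* keys in the reply:

    from pyroute2 import NetNS

    # Open a netlink socket inside the metadata namespace (name taken from the log).
    ns = NetNS('ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03')
    try:
        for link in ns.get_links():
            # Each message carries IFLA_* attributes like those dumped above.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_MTU'),
                  link.get_attr('IFLA_OPERSTATE'))
    finally:
        ns.close()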
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.604 228687 DEBUG nova.compute.manager [req-60c6051a-477c-4b14-bee8-64a02e00ebdf req-f693ead3-24b4-498e-82ee-ffe6e523f90b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received event network-vif-unplugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.604 228687 DEBUG oslo_concurrency.lockutils [req-60c6051a-477c-4b14-bee8-64a02e00ebdf req-f693ead3-24b4-498e-82ee-ffe6e523f90b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.604 228687 DEBUG oslo_concurrency.lockutils [req-60c6051a-477c-4b14-bee8-64a02e00ebdf req-f693ead3-24b4-498e-82ee-ffe6e523f90b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.604 228687 DEBUG oslo_concurrency.lockutils [req-60c6051a-477c-4b14-bee8-64a02e00ebdf req-f693ead3-24b4-498e-82ee-ffe6e523f90b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.604 228687 DEBUG nova.compute.manager [req-60c6051a-477c-4b14-bee8-64a02e00ebdf req-f693ead3-24b4-498e-82ee-ffe6e523f90b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] No waiting events found dispatching network-vif-unplugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.604 228687 DEBUG nova.compute.manager [req-60c6051a-477c-4b14-bee8-64a02e00ebdf req-f693ead3-24b4-498e-82ee-ffe6e523f90b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received event network-vif-unplugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:53:32 compute-1 systemd[1]: run-netns-ovnmeta\x2d30eb507c\x2d05e4\x2d4b59\x2db0a4\x2d24f2671c1d03.mount: Deactivated successfully.
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.609 143047 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:53:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:53:32.609 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ba0be0-626a-4349-bce7-f7d21f673e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
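Namespace removal itself is a single privileged call; neutron's ip_lib wraps pyroute2 for it. A minimal equivalent (requires root, and the namespace must no longer host processes), consistent with the systemd run-netns unmount line above:

    from pyroute2 import netns

    NS = 'ovnmeta-30eb507c-05e4-4b59-b0a4-24f2671c1d03'
    if NS in netns.listnetns():
        netns.remove(NS)   # unlinks /run/netns/<NS>, triggering the mount deactivation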
Nov 25 09:53:32 compute-1 ceph-mon[79643]: pgmap v669: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.736 228687 INFO nova.virt.libvirt.driver [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Deleting instance files /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170_del
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.736 228687 INFO nova.virt.libvirt.driver [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Deletion of /var/lib/nova/instances/ebc04c36-e114-4585-838e-99c2fcc19170_del complete
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.792 228687 DEBUG nova.virt.libvirt.host [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.792 228687 INFO nova.virt.libvirt.host [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] UEFI support detected
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.793 228687 INFO nova.compute.manager [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Took 0.46 seconds to destroy the instance on the hypervisor.
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.793 228687 DEBUG oslo.service.loopingcall [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.794 228687 DEBUG nova.compute.manager [-] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:53:32 compute-1 nova_compute[228683]: 2025-11-25 09:53:32.794 228687 DEBUG nova.network.neutron [-] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:53:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:33.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:33 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:33 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.134 228687 DEBUG nova.network.neutron [-] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.143 228687 INFO nova.compute.manager [-] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Took 1.35 seconds to deallocate network for instance.
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.171 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.171 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.220 228687 DEBUG oslo_concurrency.processutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:34 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b18003af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:34.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:34 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:53:34 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3778458874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.554 228687 DEBUG oslo_concurrency.processutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
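To refresh pool usage for its RBD backend, nova shells out to ceph df, as the 0.333s invocation above shows. A sketch of the same probe and of pulling the cluster-wide totals out of the JSON; the top-level key names ('stats', 'pools') are from recent Ceph releases and may differ slightly by version:

    import json
    import subprocess

    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    df = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)

    totals = df['stats']
    print('bytes total/used/avail:',
          totals['total_bytes'], totals['total_used_bytes'], totals['total_avail_bytes'])
    for pool in df['pools']:
        print(pool['name'], pool['stats']['bytes_used'])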
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.557 228687 DEBUG nova.compute.provider_tree [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.584 228687 DEBUG nova.scheduler.client.report [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
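The inventory dict above is what placement uses to cap allocations: a request fits while used + requested <= (total - reserved) * allocation_ratio. Applying that bound to the logged numbers:

    def effective_capacity(inv):
        # Placement's admission bound for one resource class.
        return (inv['total'] - inv['reserved']) * inv['allocation_ratio']

    inventory = {  # values copied from the log line above
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, effective_capacity(inv))   # 7169.0 MB, 16.0 vCPUs, 52.2 GB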
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.603 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.635 228687 INFO nova.scheduler.client.report [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance ebc04c36-e114-4585-838e-99c2fcc19170
Nov 25 09:53:34 compute-1 ceph-mon[79643]: pgmap v670: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:53:34 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3778458874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.683 228687 DEBUG nova.compute.manager [req-3802a639-ec91-40b9-b9e5-3c7818169c40 req-b93a86b2-caa1-4f70-8cbf-4d01868ded27 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received event network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.684 228687 DEBUG oslo_concurrency.lockutils [req-3802a639-ec91-40b9-b9e5-3c7818169c40 req-b93a86b2-caa1-4f70-8cbf-4d01868ded27 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.684 228687 DEBUG oslo_concurrency.lockutils [req-3802a639-ec91-40b9-b9e5-3c7818169c40 req-b93a86b2-caa1-4f70-8cbf-4d01868ded27 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.684 228687 DEBUG oslo_concurrency.lockutils [req-3802a639-ec91-40b9-b9e5-3c7818169c40 req-b93a86b2-caa1-4f70-8cbf-4d01868ded27 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.685 228687 DEBUG nova.compute.manager [req-3802a639-ec91-40b9-b9e5-3c7818169c40 req-b93a86b2-caa1-4f70-8cbf-4d01868ded27 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] No waiting events found dispatching network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.685 228687 WARNING nova.compute.manager [req-3802a639-ec91-40b9-b9e5-3c7818169c40 req-b93a86b2-caa1-4f70-8cbf-4d01868ded27 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received unexpected event network-vif-plugged-a6bda4e1-b79a-4869-81eb-be1e41b174a8 for instance with vm_state deleted and task_state None.
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.685 228687 DEBUG nova.compute.manager [req-3802a639-ec91-40b9-b9e5-3c7818169c40 req-b93a86b2-caa1-4f70-8cbf-4d01868ded27 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Received event network-vif-deleted-a6bda4e1-b79a-4869-81eb-be1e41b174a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.701 228687 DEBUG oslo_concurrency.lockutils [None req-867e5aac-bea3-4867-b751-007c29893dea c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ebc04c36-e114-4585-838e-99c2fcc19170" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:34 compute-1 nova_compute[228683]: 2025-11-25 09:53:34.720 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:35 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:35 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:36 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:36.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:36 compute-1 ceph-mon[79643]: pgmap v671: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:53:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:37 compute-1 nova_compute[228683]: 2025-11-25 09:53:37.078 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:37 compute-1 nova_compute[228683]: 2025-11-25 09:53:37.184 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:37 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b180049a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:37 compute-1 nova_compute[228683]: 2025-11-25 09:53:37.581 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:37 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:38 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:38.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:38 compute-1 ceph-mon[79643]: pgmap v672: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 302 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Nov 25 09:53:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:39.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:39 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:39 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b180049a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:39 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1805383330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:39 compute-1 nova_compute[228683]: 2025-11-25 09:53:39.721 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:40 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:40.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:40 compute-1 ceph-mon[79643]: pgmap v673: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 24 KiB/s wr, 30 op/s
Nov 25 09:53:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:41.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:41 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:41 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:41 compute-1 sudo[232074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:53:41 compute-1 sudo[232074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:41 compute-1 sudo[232074]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:42 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b180049a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:42.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:42 compute-1 nova_compute[228683]: 2025-11-25 09:53:42.582 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:42 compute-1 ceph-mon[79643]: pgmap v674: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 25 KiB/s wr, 236 op/s
Nov 25 09:53:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:43.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:43 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:43 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:44 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:44.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:44 compute-1 ceph-mon[79643]: pgmap v675: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:44 compute-1 nova_compute[228683]: 2025-11-25 09:53:44.723 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:45.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:45 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:45 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b200bfa20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:46 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:46.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:46 compute-1 ceph-mon[79643]: pgmap v676: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:46 compute-1 podman[232103]: 2025-11-25 09:53:46.784923822 +0000 UTC m=+0.038857747 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
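The podman line above is a periodic health probe of the ovn_metadata_agent container reporting health_status=healthy with a failing streak of 0. The same check can be triggered on demand; the container name is taken from the log, and per the config_data the test command is '/openstack/healthcheck':

    import subprocess

    # Exit code 0 means the configured healthcheck passed.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'], check=False)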
Nov 25 09:53:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:47.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:47 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0af40040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:47 compute-1 nova_compute[228683]: 2025-11-25 09:53:47.557 228687 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064412.556701, ebc04c36-e114-4585-838e-99c2fcc19170 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:53:47 compute-1 nova_compute[228683]: 2025-11-25 09:53:47.558 228687 INFO nova.compute.manager [-] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] VM Stopped (Lifecycle Event)
Nov 25 09:53:47 compute-1 nova_compute[228683]: 2025-11-25 09:53:47.575 228687 DEBUG nova.compute.manager [None req-67c2511b-ff6b-40a4-b836-ab89cdca8bf9 - - - - - -] [instance: ebc04c36-e114-4585-838e-99c2fcc19170] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:53:47 compute-1 nova_compute[228683]: 2025-11-25 09:53:47.583 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:47 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0afc0061c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:48 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:48 compute-1 ceph-mon[79643]: pgmap v677: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:49.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:49 compute-1 kernel: ganesha.nfsd[230892]: segfault at 50 ip 00007f0badee832e sp 00007f0b7dffa210 error 4 in libntirpc.so.5.8[7f0badecd000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 25 09:53:49 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
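The faulting instruction pointer falls inside the mapped range of libntirpc.so.5.8, so the crash is in ganesha's RPC layer rather than ganesha proper; systemd-coredump captures it a few lines below. One way to pull the dump back out for offline analysis, using the crashed process's PID from that record:

    import subprocess

    PID = '230783'  # ganesha.nfsd, per the systemd-coredump record below

    subprocess.run(['coredumpctl', 'info', PID], check=True)    # metadata and stack trace
    subprocess.run(['coredumpctl', 'dump', PID, '-o', '/tmp/ganesha.core'], check=True)
    # 'coredumpctl debug 230783' would open the same core directly under gdb.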
Nov 25 09:53:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[230779]: 25/11/2025 09:53:49 : epoch 69257c40 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0b00009990 fd 39 proxy ignored for local
Nov 25 09:53:49 compute-1 systemd[1]: Started Process Core Dump (PID 232122/UID 0).
Nov 25 09:53:49 compute-1 sudo[232124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:53:49 compute-1 sudo[232124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:49 compute-1 nova_compute[228683]: 2025-11-25 09:53:49.724 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:49 compute-1 sudo[232124]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:49 compute-1 sudo[232149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:53:49 compute-1 sudo[232149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:50 compute-1 sudo[232149]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:50.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:50 compute-1 systemd-coredump[232123]: Process 230783 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007f0badee832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:53:50 compute-1 systemd[1]: systemd-coredump@8-232122-0.service: Deactivated successfully.
Nov 25 09:53:50 compute-1 systemd[1]: systemd-coredump@8-232122-0.service: Consumed 1.329s CPU time.
Nov 25 09:53:50 compute-1 podman[232208]: 2025-11-25 09:53:50.668654216 +0000 UTC m=+0.020287937 container died b1d43a07869ef363b5536df3d09cfa3ca82ffb4b1e5c8c523eee9fb43bf2425b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:53:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-56eca2dfc04db3e8f5e2bfc8318ddc076257a2f2171e3ea9f42eb8191218c987-merged.mount: Deactivated successfully.
Nov 25 09:53:50 compute-1 podman[232208]: 2025-11-25 09:53:50.688935421 +0000 UTC m=+0.040569142 container remove b1d43a07869ef363b5536df3d09cfa3ca82ffb4b1e5c8c523eee9fb43bf2425b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:53:50 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
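status=139 is the conventional 128+signal encoding: 139 - 128 = 11, i.e. SIGSEGV, matching the kernel's segfault report above. Decoded:

    import signal

    status = 139  # from the systemd unit result above
    assert status > 128, 'not a signal death'
    print(signal.Signals(status - 128).name)   # -> SIGSEGV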
Nov 25 09:53:50 compute-1 ceph-mon[79643]: pgmap v678: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 1.2 KiB/s wr, 206 op/s
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:53:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:53:50 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:53:50 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.038s CPU time.
Nov 25 09:53:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:52.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:52 compute-1 nova_compute[228683]: 2025-11-25 09:53:52.584 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:52 compute-1 ceph-mon[79643]: pgmap v679: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 1.2 KiB/s wr, 206 op/s
Nov 25 09:53:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:53.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:53 compute-1 sudo[232241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:53:53 compute-1 sudo[232241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:53 compute-1 sudo[232241]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 09:53:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/593949193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:53:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 09:53:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/593949193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:53:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:54.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:54 compute-1 ceph-mon[79643]: pgmap v680: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/593949193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:53:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/593949193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:53:54 compute-1 nova_compute[228683]: 2025-11-25 09:53:54.725 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:55.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095355 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
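haproxy marked backend nfs.cephfs.0 DOWN on a Layer4 check: a bare TCP connect that got "Connection refused" once ganesha died. The probe is roughly the following; host and port are assumptions (the backend address is not shown in this log, and 2049 is only the conventional NFS port):

    import socket

    def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        # haproxy-style Layer4 health probe: connection established == healthy.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:   # includes ECONNREFUSED, the reason logged above
            return False

    print(l4_check('127.0.0.1', 2049))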
Nov 25 09:53:55 compute-1 podman[232267]: 2025-11-25 09:53:55.807062775 +0000 UTC m=+0.059951564 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:53:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:53:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:53:56 compute-1 ceph-mon[79643]: pgmap v681: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:57 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2293824702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:57 compute-1 nova_compute[228683]: 2025-11-25 09:53:57.585 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:58.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:58 compute-1 ceph-mon[79643]: pgmap v682: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:53:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:59.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:59 compute-1 nova_compute[228683]: 2025-11-25 09:53:59.726 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:54:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:54:00 compute-1 ceph-mon[79643]: pgmap v683: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:54:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:00 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 9.
Nov 25 09:54:00 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:54:00 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.038s CPU time.
Nov 25 09:54:00 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:54:01 compute-1 podman[232293]: 2025-11-25 09:54:01.051946881 +0000 UTC m=+0.050857212 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:54:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.002000019s ======
Nov 25 09:54:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:01.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000019s
Nov 25 09:54:01 compute-1 podman[232350]: 2025-11-25 09:54:01.18206142 +0000 UTC m=+0.030785002 container create 1414ee295e652c56ad5b032d63ca5158421ccdea6f71124a8c14884b43c458dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 25 09:54:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bffdae5295bebf8043d7fc879b15ace735055cce44be68cb8a849c57045d316/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bffdae5295bebf8043d7fc879b15ace735055cce44be68cb8a849c57045d316/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bffdae5295bebf8043d7fc879b15ace735055cce44be68cb8a849c57045d316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bffdae5295bebf8043d7fc879b15ace735055cce44be68cb8a849c57045d316/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:01 compute-1 podman[232350]: 2025-11-25 09:54:01.225566875 +0000 UTC m=+0.074290477 container init 1414ee295e652c56ad5b032d63ca5158421ccdea6f71124a8c14884b43c458dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:54:01 compute-1 podman[232350]: 2025-11-25 09:54:01.22981401 +0000 UTC m=+0.078537592 container start 1414ee295e652c56ad5b032d63ca5158421ccdea6f71124a8c14884b43c458dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 09:54:01 compute-1 bash[232350]: 1414ee295e652c56ad5b032d63ca5158421ccdea6f71124a8c14884b43c458dd
Nov 25 09:54:01 compute-1 podman[232350]: 2025-11-25 09:54:01.168642239 +0000 UTC m=+0.017365841 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:54:01 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:54:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:54:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:54:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:54:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:54:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:54:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:54:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:54:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:54:01 compute-1 sudo[232405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:54:01 compute-1 sudo[232405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:01 compute-1 sudo[232405]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:02 compute-1 nova_compute[228683]: 2025-11-25 09:54:02.586 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:02 compute-1 ceph-mon[79643]: pgmap v684: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 09:54:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3719102998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:54:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/890083397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:54:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:54:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:03.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:54:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:04.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:04 compute-1 ceph-mon[79643]: pgmap v685: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:54:04 compute-1 nova_compute[228683]: 2025-11-25 09:54:04.728 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:54:04.999 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:54:05.000 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:54:05.000 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:54:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:05.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:54:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:06.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:06 compute-1 ceph-mon[79643]: pgmap v686: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:54:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:07.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:07 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:54:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:07 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:54:07 compute-1 nova_compute[228683]: 2025-11-25 09:54:07.587 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:08.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:08 compute-1 ceph-mon[79643]: pgmap v687: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 25 09:54:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:09.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:09 compute-1 nova_compute[228683]: 2025-11-25 09:54:09.729 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:10 compute-1 ceph-mon[79643]: pgmap v688: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 25 09:54:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:11.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:12 compute-1 nova_compute[228683]: 2025-11-25 09:54:12.587 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:12 compute-1 ceph-mon[79643]: pgmap v689: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Nov 25 09:54:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:13.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:54:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:14 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:14.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:14 compute-1 ceph-mon[79643]: pgmap v690: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 25 09:54:14 compute-1 nova_compute[228683]: 2025-11-25 09:54:14.730 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:15 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6424001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:15 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:16 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:16 compute-1 ceph-mon[79643]: pgmap v691: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 25 09:54:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:17.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095417 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:54:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:17 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430001dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:17 compute-1 nova_compute[228683]: 2025-11-25 09:54:17.587 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:17 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6424002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:17 compute-1 podman[232453]: 2025-11-25 09:54:17.783992038 +0000 UTC m=+0.038085782 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:54:17 compute-1 ovn_controller[133620]: 2025-11-25T09:54:17Z|00036|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 25 09:54:17 compute-1 nova_compute[228683]: 2025-11-25 09:54:17.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:17 compute-1 nova_compute[228683]: 2025-11-25 09:54:17.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:17 compute-1 nova_compute[228683]: 2025-11-25 09:54:17.911 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:17 compute-1 nova_compute[228683]: 2025-11-25 09:54:17.911 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:17 compute-1 nova_compute[228683]: 2025-11-25 09:54:17.911 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:17 compute-1 nova_compute[228683]: 2025-11-25 09:54:17.911 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:54:17 compute-1 nova_compute[228683]: 2025-11-25 09:54:17.912 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:54:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:54:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4246624918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.245 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.448 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.449 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4972MB free_disk=59.94289016723633GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.449 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.449 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.496 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.496 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.511 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:54:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:18 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420002d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:18 compute-1 ceph-mon[79643]: pgmap v692: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 25 09:54:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/737909963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4246624918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3923914555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:54:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4109264677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.852 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.856 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.867 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.880 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:54:18 compute-1 nova_compute[228683]: 2025-11-25 09:54:18.880 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:54:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2627551505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:19.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:19 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420002d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:19 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430001dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4109264677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2627551505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.732 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.879 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.880 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.880 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.902 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.902 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.902 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.902 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.902 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:54:19 compute-1 nova_compute[228683]: 2025-11-25 09:54:19.911 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:20 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6424002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:20 compute-1 ceph-mon[79643]: pgmap v693: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 09:54:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:21.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:21 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420002d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:21 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420002d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3694358255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/556558394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:21 compute-1 sudo[232517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:54:21 compute-1 sudo[232517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:21 compute-1 sudo[232517]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:21 compute-1 nova_compute[228683]: 2025-11-25 09:54:21.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:21 compute-1 nova_compute[228683]: 2025-11-25 09:54:21.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:22 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430001dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:22 compute-1 nova_compute[228683]: 2025-11-25 09:54:22.589 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:22 compute-1 ceph-mon[79643]: pgmap v694: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 09:54:22 compute-1 nova_compute[228683]: 2025-11-25 09:54:22.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:23.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:23 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6424002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:23 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420002d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:24 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:54:24.240 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:54:24 compute-1 nova_compute[228683]: 2025-11-25 09:54:24.240 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:24 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:54:24.241 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:54:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:24 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420002d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:24.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:24 compute-1 ceph-mon[79643]: pgmap v695: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:54:24 compute-1 nova_compute[228683]: 2025-11-25 09:54:24.734 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:25.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:25 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64300091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:25 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6424003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:26 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:26.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:26 compute-1 ceph-mon[79643]: pgmap v696: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:54:26 compute-1 podman[232544]: 2025-11-25 09:54:26.799965141 +0000 UTC m=+0.055019558 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 09:54:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:27.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:27 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:27 compute-1 nova_compute[228683]: 2025-11-25 09:54:27.590 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:27 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64300091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:28 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64300091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:28.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:28 compute-1 ceph-mon[79643]: pgmap v697: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:54:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:29.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:29 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:29 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:29 compute-1 nova_compute[228683]: 2025-11-25 09:54:29.735 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:30 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6424003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:30.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:30 compute-1 ceph-mon[79643]: pgmap v698: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 09:54:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:31.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:31 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:31 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:31 compute-1 podman[232570]: 2025-11-25 09:54:31.788970005 +0000 UTC m=+0.042035357 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:54:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:32 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:54:32.242 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:54:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:32 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:32.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:32 compute-1 nova_compute[228683]: 2025-11-25 09:54:32.591 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:32 compute-1 ceph-mon[79643]: pgmap v699: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 14 KiB/s wr, 1 op/s
Nov 25 09:54:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095432 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:54:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:33.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:33 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:33 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:34 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:34.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:34 compute-1 ceph-mon[79643]: pgmap v700: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 14 KiB/s wr, 0 op/s
Nov 25 09:54:34 compute-1 nova_compute[228683]: 2025-11-25 09:54:34.736 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:35.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:35 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:35 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:36 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:36.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:36 compute-1 ceph-mon[79643]: pgmap v701: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 14 KiB/s wr, 0 op/s
Nov 25 09:54:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:37.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:37 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:37 compute-1 nova_compute[228683]: 2025-11-25 09:54:37.592 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:37 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:38 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:38.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:38 compute-1 ceph-mon[79643]: pgmap v702: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 25 09:54:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:39.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:39 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:39 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:39 compute-1 nova_compute[228683]: 2025-11-25 09:54:39.737 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:39 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4111200644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:40 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:54:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:40 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:40.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:40 compute-1 ceph-mon[79643]: pgmap v703: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Nov 25 09:54:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:41.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:41 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:41 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:41 compute-1 sudo[232592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:54:41 compute-1 sudo[232592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:41 compute-1 sudo[232592]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:42 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:42 compute-1 nova_compute[228683]: 2025-11-25 09:54:42.593 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:42 compute-1 ceph-mon[79643]: pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.7 KiB/s wr, 30 op/s
Nov 25 09:54:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:54:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:54:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:54:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:44 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:44.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:44 compute-1 nova_compute[228683]: 2025-11-25 09:54:44.738 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:44 compute-1 ceph-mon[79643]: pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 KiB/s wr, 30 op/s
Nov 25 09:54:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:45.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:45 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:45 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:46 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:54:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:46 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438001080 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:46 compute-1 ceph-mon[79643]: pgmap v706: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 KiB/s wr, 30 op/s
Nov 25 09:54:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:47 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:47 compute-1 nova_compute[228683]: 2025-11-25 09:54:47.594 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:47 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:48 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:48 compute-1 podman[232622]: 2025-11-25 09:54:48.780968119 +0000 UTC m=+0.036137620 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 09:54:48 compute-1 ceph-mon[79643]: pgmap v707: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 32 op/s
Nov 25 09:54:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:49 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438001bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:49 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:49 compute-1 nova_compute[228683]: 2025-11-25 09:54:49.739 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:50 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:50 compute-1 ceph-mon[79643]: pgmap v708: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 31 op/s
Nov 25 09:54:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:51 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64200052b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:51 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438001bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:52 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:52 compute-1 nova_compute[228683]: 2025-11-25 09:54:52.594 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095452 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:54:52 compute-1 ceph-mon[79643]: pgmap v709: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Nov 25 09:54:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:53 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:53 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64200052d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:53 compute-1 sudo[232641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:54:53 compute-1 sudo[232641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:53 compute-1 sudo[232641]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:53 compute-1 sudo[232666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:54:53 compute-1 sudo[232666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 09:54:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647834050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:54:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 09:54:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647834050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:54:54 compute-1 sudo[232666]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:54 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438001bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:54.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:54 compute-1 nova_compute[228683]: 2025-11-25 09:54:54.741 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:54 compute-1 ceph-mon[79643]: pgmap v710: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3647834050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3647834050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:54:54 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:54:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:55 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:55 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.434 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "07e0280f-d3d7-48db-a9c4-01836517166c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.435 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.445 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.491 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.492 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.496 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.497 228687 INFO nova.compute.claims [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Claim successful on node compute-1.ctlplane.example.com
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.566 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:54:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:56 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64200052f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:56.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:56 compute-1 ceph-mon[79643]: pgmap v711: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:56 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:54:56 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1074496672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.898 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.902 228687 DEBUG nova.compute.provider_tree [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.914 228687 DEBUG nova.scheduler.client.report [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.928 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.929 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.974 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.975 228687 DEBUG nova.network.neutron [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:54:56 compute-1 nova_compute[228683]: 2025-11-25 09:54:56.989 228687 INFO nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.001 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.071 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.072 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.072 228687 INFO nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Creating image(s)
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.090 228687 DEBUG nova.storage.rbd_utils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 07e0280f-d3d7-48db-a9c4-01836517166c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.107 228687 DEBUG nova.storage.rbd_utils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 07e0280f-d3d7-48db-a9c4-01836517166c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.123 228687 DEBUG nova.storage.rbd_utils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 07e0280f-d3d7-48db-a9c4-01836517166c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.125 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.171 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.172 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.173 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.173 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.189 228687 DEBUG nova.storage.rbd_utils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 07e0280f-d3d7-48db-a9c4-01836517166c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.191 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 07e0280f-d3d7-48db-a9c4-01836517166c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:54:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:57.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.205 228687 DEBUG nova.policy [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:54:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:57 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.324 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 07e0280f-d3d7-48db-a9c4-01836517166c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.373 228687 DEBUG nova.storage.rbd_utils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image 07e0280f-d3d7-48db-a9c4-01836517166c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.435 228687 DEBUG nova.objects.instance [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid 07e0280f-d3d7-48db-a9c4-01836517166c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.448 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.448 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Ensure instance console log exists: /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.449 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.449 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.450 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:57 compute-1 sudo[232893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:54:57 compute-1 sudo[232893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:57 compute-1 sudo[232893]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:57 compute-1 podman[232935]: 2025-11-25 09:54:57.524520146 +0000 UTC m=+0.055300813 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.595 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:57 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:57 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1074496672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:57 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:57 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:57 compute-1 nova_compute[228683]: 2025-11-25 09:54:57.821 228687 DEBUG nova.network.neutron [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Successfully created port: 2706a828-75ff-4ea1-835e-f5308d75c14a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:54:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:58 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:58.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:58 compute-1 nova_compute[228683]: 2025-11-25 09:54:58.793 228687 DEBUG nova.network.neutron [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Successfully updated port: 2706a828-75ff-4ea1-835e-f5308d75c14a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:54:58 compute-1 nova_compute[228683]: 2025-11-25 09:54:58.817 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:54:58 compute-1 nova_compute[228683]: 2025-11-25 09:54:58.818 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:54:58 compute-1 nova_compute[228683]: 2025-11-25 09:54:58.818 228687 DEBUG nova.network.neutron [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:54:58 compute-1 ceph-mon[79643]: pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:58 compute-1 nova_compute[228683]: 2025-11-25 09:54:58.907 228687 DEBUG nova.compute.manager [req-ca882fd8-3931-4459-a309-97df086a3a1b req-a30481ac-8df8-4452-bc58-d6778d3f304a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-changed-2706a828-75ff-4ea1-835e-f5308d75c14a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:54:58 compute-1 nova_compute[228683]: 2025-11-25 09:54:58.908 228687 DEBUG nova.compute.manager [req-ca882fd8-3931-4459-a309-97df086a3a1b req-a30481ac-8df8-4452-bc58-d6778d3f304a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Refreshing instance network info cache due to event network-changed-2706a828-75ff-4ea1-835e-f5308d75c14a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:54:58 compute-1 nova_compute[228683]: 2025-11-25 09:54:58.908 228687 DEBUG oslo_concurrency.lockutils [req-ca882fd8-3931-4459-a309-97df086a3a1b req-a30481ac-8df8-4452-bc58-d6778d3f304a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:54:58 compute-1 nova_compute[228683]: 2025-11-25 09:54:58.950 228687 DEBUG nova.network.neutron [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:54:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:54:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:59.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:59 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420005310 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:54:59 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:59 compute-1 nova_compute[228683]: 2025-11-25 09:54:59.742 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:00 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64240048c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:00.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:00 compute-1 ceph-mon[79643]: pgmap v713: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:55:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:01.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.217 228687 DEBUG nova.network.neutron [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updating instance_info_cache with network_info: [{"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.236 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.236 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Instance network_info: |[{"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.237 228687 DEBUG oslo_concurrency.lockutils [req-ca882fd8-3931-4459-a309-97df086a3a1b req-a30481ac-8df8-4452-bc58-d6778d3f304a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.237 228687 DEBUG nova.network.neutron [req-ca882fd8-3931-4459-a309-97df086a3a1b req-a30481ac-8df8-4452-bc58-d6778d3f304a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Refreshing network info cache for port 2706a828-75ff-4ea1-835e-f5308d75c14a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.239 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Start _get_guest_xml network_info=[{"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'boot_index': 0, 'device_name': '/dev/vda', 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.242 228687 WARNING nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.244 228687 DEBUG nova.virt.libvirt.host [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.245 228687 DEBUG nova.virt.libvirt.host [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:55:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.249 228687 DEBUG nova.virt.libvirt.host [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.249 228687 DEBUG nova.virt.libvirt.host [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.250 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.250 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.250 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.250 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.250 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.250 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.251 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.251 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.251 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.251 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.251 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.251 228687 DEBUG nova.virt.hardware [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.253 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:01 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:55:01 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1480591424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.577 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.592 228687 DEBUG nova.storage.rbd_utils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 07e0280f-d3d7-48db-a9c4-01836517166c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.595 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420005cd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1480591424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:01 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:55:01 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3723789558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:01 compute-1 sudo[233021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:55:01 compute-1 sudo[233021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:01 compute-1 sudo[233021]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.941 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.942 228687 DEBUG nova.virt.libvirt.vif [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:54:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1502691703',display_name='tempest-TestNetworkBasicOps-server-1502691703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1502691703',id=4,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeyGwyncFURVvM+9ayfAnXNf9GI1br1ZEr3Cec7qxPaAGJ0uLMok0qr7FCAA2bcXAfJWXqJKIDoOOo5jOb/vKN2AnGmZWeaehzRLzEzyVtWlX9r830132IYt/QQXy8Zjw==',key_name='tempest-TestNetworkBasicOps-21893871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-2e0q5mpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:54:57Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=07e0280f-d3d7-48db-a9c4-01836517166c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.942 228687 DEBUG nova.network.os_vif_util [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.943 228687 DEBUG nova.network.os_vif_util [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:15:b6,bridge_name='br-int',has_traffic_filtering=True,id=2706a828-75ff-4ea1-835e-f5308d75c14a,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2706a828-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.944 228687 DEBUG nova.objects.instance [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid 07e0280f-d3d7-48db-a9c4-01836517166c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.957 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <uuid>07e0280f-d3d7-48db-a9c4-01836517166c</uuid>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <name>instance-00000004</name>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <memory>131072</memory>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <vcpu>1</vcpu>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <metadata>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <nova:name>tempest-TestNetworkBasicOps-server-1502691703</nova:name>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <nova:creationTime>2025-11-25 09:55:01</nova:creationTime>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <nova:flavor name="m1.nano">
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <nova:memory>128</nova:memory>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <nova:disk>1</nova:disk>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <nova:swap>0</nova:swap>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       </nova:flavor>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <nova:owner>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       </nova:owner>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <nova:ports>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <nova:port uuid="2706a828-75ff-4ea1-835e-f5308d75c14a">
Nov 25 09:55:01 compute-1 nova_compute[228683]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         </nova:port>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       </nova:ports>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </nova:instance>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   </metadata>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <sysinfo type="smbios">
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <system>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <entry name="serial">07e0280f-d3d7-48db-a9c4-01836517166c</entry>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <entry name="uuid">07e0280f-d3d7-48db-a9c4-01836517166c</entry>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </system>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   </sysinfo>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <os>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <boot dev="hd"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <smbios mode="sysinfo"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   </os>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <features>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <acpi/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <apic/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <vmcoreinfo/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   </features>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <clock offset="utc">
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <timer name="hpet" present="no"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   </clock>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <cpu mode="host-model" match="exact">
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   </cpu>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   <devices>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <disk type="network" device="disk">
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <driver type="raw" cache="none"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <source protocol="rbd" name="vms/07e0280f-d3d7-48db-a9c4-01836517166c_disk">
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       </source>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <auth username="openstack">
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       </auth>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <target dev="vda" bus="virtio"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <disk type="network" device="cdrom">
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <driver type="raw" cache="none"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <source protocol="rbd" name="vms/07e0280f-d3d7-48db-a9c4-01836517166c_disk.config">
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       </source>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <auth username="openstack">
Nov 25 09:55:01 compute-1 nova_compute[228683]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       </auth>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <target dev="sda" bus="sata"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <interface type="ethernet">
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <mac address="fa:16:3e:c5:15:b6"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <model type="virtio"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <mtu size="1442"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <target dev="tap2706a828-75"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </interface>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <serial type="pty">
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <log file="/var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c/console.log" append="off"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </serial>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <video>
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <model type="virtio"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </video>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <input type="tablet" bus="usb"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <rng model="virtio">
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </rng>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <controller type="usb" index="0"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     <memballoon model="virtio">
Nov 25 09:55:01 compute-1 nova_compute[228683]:       <stats period="10"/>
Nov 25 09:55:01 compute-1 nova_compute[228683]:     </memballoon>
Nov 25 09:55:01 compute-1 nova_compute[228683]:   </devices>
Nov 25 09:55:01 compute-1 nova_compute[228683]: </domain>
Nov 25 09:55:01 compute-1 nova_compute[228683]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.957 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Preparing to wait for external event network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.957 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.958 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.958 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
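[annotation] The three lockutils lines above show Nova registering an event waiter before it plugs the port: spawn will later block on network-vif-plugged so the guest does not start before Neutron has wired the VIF (the wait completes at 09:55:03.948 below). A toy sketch of that prepare-then-deliver pattern, pure standard library, with all names invented for illustration:

import threading

class InstanceEvents:
    # Toy version of the prepare/deliver pattern; not Nova's actual class.
    def __init__(self):
        self._lock = threading.Lock()          # plays the "-events" lock role
        self._events = {}                      # (instance, event) -> Event

    def prepare(self, instance, event):
        with self._lock:
            return self._events.setdefault((instance, event), threading.Event())

    def deliver(self, instance, event):
        with self._lock:
            waiter = self._events.pop((instance, event), None)
        if waiter:
            waiter.set()                       # wakes the blocked spawn thread

events = InstanceEvents()
waiter = events.prepare("07e0280f", "network-vif-plugged")
# ... plug the VIF; when the external event arrives, another thread calls:
events.deliver("07e0280f", "network-vif-plugged")
waiter.wait(timeout=300)                       # returns immediately once set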
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.959 228687 DEBUG nova.virt.libvirt.vif [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:54:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1502691703',display_name='tempest-TestNetworkBasicOps-server-1502691703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1502691703',id=4,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeyGwyncFURVvM+9ayfAnXNf9GI1br1ZEr3Cec7qxPaAGJ0uLMok0qr7FCAA2bcXAfJWXqJKIDoOOo5jOb/vKN2AnGmZWeaehzRLzEzyVtWlX9r830132IYt/QQXy8Zjw==',key_name='tempest-TestNetworkBasicOps-21893871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-2e0q5mpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:54:57Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=07e0280f-d3d7-48db-a9c4-01836517166c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.959 228687 DEBUG nova.network.os_vif_util [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.959 228687 DEBUG nova.network.os_vif_util [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:15:b6,bridge_name='br-int',has_traffic_filtering=True,id=2706a828-75ff-4ea1-835e-f5308d75c14a,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2706a828-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.959 228687 DEBUG os_vif [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:15:b6,bridge_name='br-int',has_traffic_filtering=True,id=2706a828-75ff-4ea1-835e-f5308d75c14a,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2706a828-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.960 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.960 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.961 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.962 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.963 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2706a828-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.963 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2706a828-75, col_values=(('external_ids', {'iface-id': '2706a828-75ff-4ea1-835e-f5308d75c14a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:15:b6', 'vm-uuid': '07e0280f-d3d7-48db-a9c4-01836517166c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
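[annotation] These two commands are the whole OVS side of the plug: add tap2706a828-75 to br-int, then stamp the Interface row with the external_ids (iface-id, attached-mac, vm-uuid) that ovn-controller matches against the logical port when it claims it at 09:55:03 below. A hedged command-line equivalent of the committed transaction, driven from Python (values copied from the log; needs ovs-vsctl and root):

import subprocess

port = "tap2706a828-75"
# Equivalent of AddPortCommand(may_exist=True) plus DbSetCommand on Interface.
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
     "--", "set", "Interface", port,
     "external_ids:iface-id=2706a828-75ff-4ea1-835e-f5308d75c14a",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:c5:15:b6",
     "external_ids:vm-uuid=07e0280f-d3d7-48db-a9c4-01836517166c"],
    check=True,
)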
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.964 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:01 compute-1 NetworkManager[48856]: <info>  [1764064501.9649] manager: (tap2706a828-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.966 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.969 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:01 compute-1 podman[233045]: 2025-11-25 09:55:01.969612126 +0000 UTC m=+0.043820585 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:55:01 compute-1 nova_compute[228683]: 2025-11-25 09:55:01.970 228687 INFO os_vif [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:15:b6,bridge_name='br-int',has_traffic_filtering=True,id=2706a828-75ff-4ea1-835e-f5308d75c14a,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2706a828-75')
Nov 25 09:55:02 compute-1 nova_compute[228683]: 2025-11-25 09:55:02.003 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:55:02 compute-1 nova_compute[228683]: 2025-11-25 09:55:02.003 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:55:02 compute-1 nova_compute[228683]: 2025-11-25 09:55:02.003 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:c5:15:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:55:02 compute-1 nova_compute[228683]: 2025-11-25 09:55:02.003 228687 INFO nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Using config drive
Nov 25 09:55:02 compute-1 nova_compute[228683]: 2025-11-25 09:55:02.019 228687 DEBUG nova.storage.rbd_utils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 07e0280f-d3d7-48db-a9c4-01836517166c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:02 compute-1 nova_compute[228683]: 2025-11-25 09:55:02.305 228687 DEBUG nova.network.neutron [req-ca882fd8-3931-4459-a309-97df086a3a1b req-a30481ac-8df8-4452-bc58-d6778d3f304a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updated VIF entry in instance network info cache for port 2706a828-75ff-4ea1-835e-f5308d75c14a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:55:02 compute-1 nova_compute[228683]: 2025-11-25 09:55:02.306 228687 DEBUG nova.network.neutron [req-ca882fd8-3931-4459-a309-97df086a3a1b req-a30481ac-8df8-4452-bc58-d6778d3f304a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updating instance_info_cache with network_info: [{"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:02 compute-1 nova_compute[228683]: 2025-11-25 09:55:02.318 228687 DEBUG oslo_concurrency.lockutils [req-ca882fd8-3931-4459-a309-97df086a3a1b req-a30481ac-8df8-4452-bc58-d6778d3f304a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:55:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:02 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:02 compute-1 ceph-mon[79643]: pgmap v714: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3723789558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.164 228687 INFO nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Creating config drive at /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c/disk.config
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.168 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_dytww1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:03.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:03 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.284 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_dytww1" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.303 228687 DEBUG nova.storage.rbd_utils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 07e0280f-d3d7-48db-a9c4-01836517166c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.305 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c/disk.config 07e0280f-d3d7-48db-a9c4-01836517166c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.379 228687 DEBUG oslo_concurrency.processutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c/disk.config 07e0280f-d3d7-48db-a9c4-01836517166c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.380 228687 INFO nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Deleting local config drive /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c/disk.config because it was imported into RBD.
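[annotation] Lines 09:55:03.164 through 09:55:03.380 are the complete config-drive round trip: build an ISO9660 image with volume label config-2, import it into the same Ceph pool as the root disk, then delete the local file, since the guest reads it over RBD as the sata cdrom defined in the XML above. A condensed sketch of those two subprocess calls, with paths and flags taken from the log (needs mkisofs, the rbd CLI, and a valid /etc/ceph/ceph.conf):

import os
import subprocess

uuid = "07e0280f-d3d7-48db-a9c4-01836517166c"
iso = "/var/lib/nova/instances/%s/disk.config" % uuid

# 1. Build the config drive; cloud-init locates it by the "config-2" label.
subprocess.run(["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                "/tmp/tmp8_dytww1"], check=True)  # staging dir from the log

# 2. Import into the "vms" pool, then drop the local copy (the guest
#    attaches it via librbd from now on, so the file is redundant).
subprocess.run(["rbd", "import", "--pool", "vms", iso,
                "%s_disk.config" % uuid, "--image-format=2",
                "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
               check=True)
os.remove(iso)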
Nov 25 09:55:03 compute-1 NetworkManager[48856]: <info>  [1764064503.4120] manager: (tap2706a828-75): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Nov 25 09:55:03 compute-1 kernel: tap2706a828-75: entered promiscuous mode
Nov 25 09:55:03 compute-1 ovn_controller[133620]: 2025-11-25T09:55:03Z|00037|binding|INFO|Claiming lport 2706a828-75ff-4ea1-835e-f5308d75c14a for this chassis.
Nov 25 09:55:03 compute-1 ovn_controller[133620]: 2025-11-25T09:55:03Z|00038|binding|INFO|2706a828-75ff-4ea1-835e-f5308d75c14a: Claiming fa:16:3e:c5:15:b6 10.100.0.10
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.415 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.421 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.425 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:15:b6 10.100.0.10'], port_security=['fa:16:3e:c5:15:b6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07e0280f-d3d7-48db-a9c4-01836517166c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cdb4286d-2e7f-469b-b9a1-b5502a1f3b7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6eca7baa-8b78-4122-aef9-182609bf4892, chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], logical_port=2706a828-75ff-4ea1-835e-f5308d75c14a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.425 142940 INFO neutron.agent.ovn.metadata.agent [-] Port 2706a828-75ff-4ea1-835e-f5308d75c14a in datapath ed91d6bf-56aa-4e17-a7ca-48f04cae081d bound to our chassis
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.427 142940 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed91d6bf-56aa-4e17-a7ca-48f04cae081d
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.436 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[83e47fea-dbe7-4e2f-b758-bbe36c697f05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.437 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped91d6bf-51 in ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
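[annotation] Provisioning metadata for a datapath means giving the network a dedicated ovnmeta-<network-uuid> namespace joined to br-int by a veth pair: taped91d6bf-50 stays outside on the bridge, taped91d6bf-51 moves into the namespace and carries the 169.254.169.254 metadata address that haproxy binds below. Roughly, in iproute2 terms (a sketch of the effect only; the agent drives pyroute2 under privsep rather than shelling out, and the exact addressing here is an assumption):

import subprocess

ns = "ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d"
outer, inner = "taped91d6bf-50", "taped91d6bf-51"

def ip(*args):
    subprocess.run(["ip"] + list(args), check=True)

ip("netns", "add", ns)
ip("link", "add", outer, "type", "veth", "peer", "name", inner)
ip("link", "set", inner, "netns", ns)            # inner end into the namespace
ip("-n", ns, "addr", "add", "169.254.169.254/32", "dev", inner)
ip("-n", ns, "link", "set", inner, "up")
ip("link", "set", outer, "up")
# The outer end is then added to br-int with its own iface-id, exactly like
# the AddPortCommand/DbSetCommand transaction at 09:55:03.647 below.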
Nov 25 09:55:03 compute-1 systemd-udevd[233140]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.439 231684 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped91d6bf-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.439 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[07d6e7c7-6389-400e-a20d-315ddd7a9b16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.440 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[703530a3-fa1e-471c-8a21-da55d03fc62e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 systemd-machined[192680]: New machine qemu-2-instance-00000004.
Nov 25 09:55:03 compute-1 NetworkManager[48856]: <info>  [1764064503.4501] device (tap2706a828-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:55:03 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.450 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd0f8d8-f072-420a-a1f9-dc60f0e46ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 NetworkManager[48856]: <info>  [1764064503.4547] device (tap2706a828-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.470 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[034c0a55-85a9-412d-86da-c15cc55f48d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_controller[133620]: 2025-11-25T09:55:03Z|00039|binding|INFO|Setting lport 2706a828-75ff-4ea1-835e-f5308d75c14a ovn-installed in OVS
Nov 25 09:55:03 compute-1 ovn_controller[133620]: 2025-11-25T09:55:03Z|00040|binding|INFO|Setting lport 2706a828-75ff-4ea1-835e-f5308d75c14a up in Southbound
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.490 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[29a50da6-8397-41d6-a1ac-a3991f0b5eb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.492 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.496 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[f736fc1a-6a13-4347-a854-ed025345b732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 NetworkManager[48856]: <info>  [1764064503.4974] manager: (taped91d6bf-50): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Nov 25 09:55:03 compute-1 systemd-udevd[233143]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.522 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[64294601-cf58-4cc9-9dd7-9af1ac00dbb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.524 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[f5077671-e852-41d1-87b9-5cd33399bfbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 NetworkManager[48856]: <info>  [1764064503.5402] device (taped91d6bf-50): carrier: link connected
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.546 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[9e99da69-3864-4106-bdfb-3920c06a0c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.561 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[59ebdb12-b117-4fa8-8173-bba371e92c3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped91d6bf-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:18:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 328808, 'reachable_time': 38753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233166, 'error': None, 'target': 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.573 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[877d2ce7-ec8d-401c-8335-56264166c6a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:1845'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 328808, 'tstamp': 328808}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233168, 'error': None, 'target': 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.586 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[4130f207-93f7-4500-acb4-a20295dfe279]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped91d6bf-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:18:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 328808, 'reachable_time': 38753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 233169, 'error': None, 'target': 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.608 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[5558b951-f7a0-4cd1-84da-43926bb713ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.646 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3a1f1d-4d9d-4d5b-a87e-cc78bdcd54f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.647 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped91d6bf-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.648 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.648 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped91d6bf-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:03 compute-1 kernel: taped91d6bf-50: entered promiscuous mode
Nov 25 09:55:03 compute-1 NetworkManager[48856]: <info>  [1764064503.6504] manager: (taped91d6bf-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.650 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.653 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped91d6bf-50, col_values=(('external_ids', {'iface-id': '1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.655 228687 DEBUG nova.compute.manager [req-8d242e5b-6a7f-4007-a8cd-38cc0bd5e798 req-dbec5164-5f82-463c-be9d-8ed7a5cecae2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.656 228687 DEBUG oslo_concurrency.lockutils [req-8d242e5b-6a7f-4007-a8cd-38cc0bd5e798 req-dbec5164-5f82-463c-be9d-8ed7a5cecae2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.656 228687 DEBUG oslo_concurrency.lockutils [req-8d242e5b-6a7f-4007-a8cd-38cc0bd5e798 req-dbec5164-5f82-463c-be9d-8ed7a5cecae2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.656 228687 DEBUG oslo_concurrency.lockutils [req-8d242e5b-6a7f-4007-a8cd-38cc0bd5e798 req-dbec5164-5f82-463c-be9d-8ed7a5cecae2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.657 228687 DEBUG nova.compute.manager [req-8d242e5b-6a7f-4007-a8cd-38cc0bd5e798 req-dbec5164-5f82-463c-be9d-8ed7a5cecae2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Processing event network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.658 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:03 compute-1 ovn_controller[133620]: 2025-11-25T09:55:03Z|00041|binding|INFO|Releasing lport 1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1 from this chassis (sb_readonly=0)
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.660 142940 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.661 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aed469-4d78-478f-9ea3-bc4b79899752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.661 142940 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: global
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     log         /dev/log local0 debug
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     log-tag     haproxy-metadata-proxy-ed91d6bf-56aa-4e17-a7ca-48f04cae081d
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     user        root
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     group       root
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     maxconn     1024
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     pidfile     /var/lib/neutron/external/pids/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.pid.haproxy
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     daemon
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: defaults
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     log global
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     mode http
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     option httplog
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     option dontlognull
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     option http-server-close
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     option forwardfor
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     retries                 3
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     timeout http-request    30s
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     timeout connect         30s
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     timeout client          32s
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     timeout server          32s
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     timeout http-keep-alive 30s
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: listen listener
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     bind 169.254.169.254:80
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:     http-request add-header X-OVN-Network-ID ed91d6bf-56aa-4e17-a7ca-48f04cae081d
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:55:03 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:03.663 142940 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'env', 'PROCESS_TAG=haproxy-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
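[annotation] The rendered config above is the entire metadata proxy: haproxy binds 169.254.169.254:80 inside the namespace, tags each request with X-OVN-Network-ID so the agent knows which network it came from, and forwards to the agent's UNIX socket (in haproxy, a server address starting with "/" is treated as a UNIX socket path). Stripped of sudo and rootwrap, the launch command on the previous line comes down to this sketch:

import subprocess

ns = "ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d"
conf = "/var/lib/neutron/ovn-metadata-proxy/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.conf"

# haproxy daemonizes itself ("daemon" in the global section), so this call
# returns as soon as the proxy is up; PROCESS_TAG appears in the process
# environment so the agent can identify its own haproxy later.
subprocess.run(
    ["ip", "netns", "exec", ns,
     "env", "PROCESS_TAG=haproxy-ed91d6bf-56aa-4e17-a7ca-48f04cae081d",
     "haproxy", "-f", conf],
    check=True,
)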
Nov 25 09:55:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:03 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.671 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.945 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064503.9452748, 07e0280f-d3d7-48db-a9c4-01836517166c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.946 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] VM Started (Lifecycle Event)
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.948 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.952 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.955 228687 INFO nova.virt.libvirt.driver [-] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Instance spawned successfully.
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.955 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.965 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.969 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
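
The state numbers in this sync message map to nova.compute.power_state constants (0 = NOSTATE, 1 = RUNNING), so the database still says "no state" while libvirt already reports the guest running; because task_state is 'spawning', the handler skips the sync, as a later record confirms. A paraphrased sketch of that decision (not nova's literal code):

    NOSTATE, RUNNING = 0, 1   # nova.compute.power_state values

    def should_sync(db_power_state, vm_power_state, task_state):
        if task_state is not None:     # e.g. 'spawning': transition in flight
            return False               # skip; states will converge on their own
        return db_power_state != vm_power_state

    assert should_sync(NOSTATE, RUNNING, "spawning") is False
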
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.972 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.972 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.972 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.972 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.973 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:03 compute-1 nova_compute[228683]: 2025-11-25 09:55:03.973 228687 DEBUG nova.virt.libvirt.driver [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
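
The six "Found default for ..." records persist the libvirt driver's chosen device models so that a later rebuild or resize keeps the same virtual hardware even if the deployment defaults change. Collected as a dict (a sketch; the image_ prefix follows nova's instance system_metadata convention):

    defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }
    system_metadata = {f"image_{k}": v for k, v in defaults.items()}
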
Nov 25 09:55:03 compute-1 podman[233240]: 2025-11-25 09:55:03.977018068 +0000 UTC m=+0.041725457 container create 9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:55:03 compute-1 systemd[1]: Started libpod-conmon-9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989.scope.
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.014 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.014 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064503.945433, 07e0280f-d3d7-48db-a9c4-01836517166c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.014 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] VM Paused (Lifecycle Event)
Nov 25 09:55:04 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:55:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c61b8d763157fabeb1b45f4a1fdabd67b04b2892d31488fc7bbc343f8671d6a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:04 compute-1 podman[233240]: 2025-11-25 09:55:04.028558422 +0000 UTC m=+0.093265831 container init 9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:55:04 compute-1 podman[233240]: 2025-11-25 09:55:04.033471322 +0000 UTC m=+0.098178712 container start 9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.034 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:55:04 compute-1 podman[233240]: 2025-11-25 09:55:03.962183096 +0000 UTC m=+0.026890505 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
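
Note the "image pull" record lands after "start" in journal order, but podman's monotonic offsets (m=+...) give the real sequence: pull (m=+0.027), create (m=+0.042), init (m=+0.093), start (m=+0.098). Sorting by that offset recovers the container lifecycle when reading interleaved logs:

    # Offsets taken from the four podman[233240] records above.
    events = [
        ("create", 0.041725457),
        ("init",   0.093265831),
        ("start",  0.098178712),
        ("pull",   0.026890505),
    ]
    print([name for name, _ in sorted(events, key=lambda e: e[1])])
    # -> ['pull', 'create', 'init', 'start']
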
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.036 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064503.950289, 07e0280f-d3d7-48db-a9c4-01836517166c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.037 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] VM Resumed (Lifecycle Event)
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.044 228687 INFO nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Took 6.97 seconds to spawn the instance on the hypervisor.
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.045 228687 DEBUG nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:55:04 compute-1 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[233252]: [NOTICE]   (233256) : New worker (233258) forked
Nov 25 09:55:04 compute-1 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[233252]: [NOTICE]   (233256) : Loading success.
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.051 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.052 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.070 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.088 228687 INFO nova.compute.manager [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Took 7.62 seconds to build instance.
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.097 228687 DEBUG oslo_concurrency.lockutils [None req-cc8475b6-d4ca-4e5c-95da-3cf37c5ac19b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
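
The lock released here is the per-instance-UUID lock that serialized the whole build; "held 7.663s" matches the 7.62 s build time reported just above. oslo.concurrency exposes the same primitive as a decorator; a sketch of the pattern (the body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("07e0280f-d3d7-48db-a9c4-01836517166c")
    def _locked_build():
        ...  # claim resources, plug VIFs, spawn the guest
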
Nov 25 09:55:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:04 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420005cd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:04.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
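
These anonymous "HEAD / HTTP/1.0" requests recur about every two seconds, alternating between 192.168.122.100 and .102, the signature of load-balancer health checks against the RGW beast frontend rather than real S3/Swift traffic. A minimal reproduction (the RGW port is not shown in the log, so 8080 below is an assumption):

    import socket

    s = socket.create_connection(("192.168.122.102", 8080))  # port assumed
    s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")   # the exact probe in the access log
    print(s.recv(200).decode(errors="replace").splitlines()[0])  # expect a 200
    s.close()
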
Nov 25 09:55:04 compute-1 nova_compute[228683]: 2025-11-25 09:55:04.743 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:04 compute-1 ceph-mon[79643]: pgmap v715: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:05.000 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:05.000 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:05.001 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
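
This acquire/release pair is neutron's ProcessMonitor making its periodic liveness pass over managed child processes, here the haproxy just launched; "held 0.000s" means every child checked out alive. The check amounts to a pidfile-plus-signal-0 probe, sketched below (the pidfile path is illustrative):

    import os

    def is_alive(pidfile):
        try:
            pid = int(open(pidfile).read().strip())
            os.kill(pid, 0)     # signal 0: existence/permission check only
            return True
        except (OSError, ValueError):
            return False

    # e.g. is_alive("/var/lib/neutron/external/pids/"
    #               "ed91d6bf-56aa-4e17-a7ca-48f04cae081d.pid")
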
Nov 25 09:55:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:05 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:05 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:05 compute-1 nova_compute[228683]: 2025-11-25 09:55:05.733 228687 DEBUG nova.compute.manager [req-c2ad8deb-5172-4434-b48b-77ffd9e7a8ac req-8a4fa022-c3ac-476d-af3b-c9fae19d2011 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:05 compute-1 nova_compute[228683]: 2025-11-25 09:55:05.734 228687 DEBUG oslo_concurrency.lockutils [req-c2ad8deb-5172-4434-b48b-77ffd9e7a8ac req-8a4fa022-c3ac-476d-af3b-c9fae19d2011 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:05 compute-1 nova_compute[228683]: 2025-11-25 09:55:05.734 228687 DEBUG oslo_concurrency.lockutils [req-c2ad8deb-5172-4434-b48b-77ffd9e7a8ac req-8a4fa022-c3ac-476d-af3b-c9fae19d2011 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:05 compute-1 nova_compute[228683]: 2025-11-25 09:55:05.734 228687 DEBUG oslo_concurrency.lockutils [req-c2ad8deb-5172-4434-b48b-77ffd9e7a8ac req-8a4fa022-c3ac-476d-af3b-c9fae19d2011 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:05 compute-1 nova_compute[228683]: 2025-11-25 09:55:05.735 228687 DEBUG nova.compute.manager [req-c2ad8deb-5172-4434-b48b-77ffd9e7a8ac req-8a4fa022-c3ac-476d-af3b-c9fae19d2011 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] No waiting events found dispatching network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:55:05 compute-1 nova_compute[228683]: 2025-11-25 09:55:05.735 228687 WARNING nova.compute.manager [req-c2ad8deb-5172-4434-b48b-77ffd9e7a8ac req-8a4fa022-c3ac-476d-af3b-c9fae19d2011 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received unexpected event network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a for instance with vm_state active and task_state None.
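
The WARNING is benign ordering noise: the build thread registered a waiter for network-vif-plugged, consumed it at 09:55:03.948, and finished; when the plug is re-reported at 09:55:05.735 the instance is already active with no task, so there is no waiter left to pop. The pattern, paraphrased:

    waiters = {}  # (event_name, port_id) -> threading.Event, registered pre-plug

    def deliver(name, tag):
        ev = waiters.pop((name, tag), None)
        if ev is None:
            print(f"unexpected event {name}-{tag}")  # the WARNING above
        else:
            ev.set()                                 # wakes wait_for_instance_event
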
Nov 25 09:55:06 compute-1 NetworkManager[48856]: <info>  [1764064506.3402] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 25 09:55:06 compute-1 NetworkManager[48856]: <info>  [1764064506.3409] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 25 09:55:06 compute-1 nova_compute[228683]: 2025-11-25 09:55:06.341 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:06 compute-1 ovn_controller[133620]: 2025-11-25T09:55:06Z|00042|binding|INFO|Releasing lport 1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1 from this chassis (sb_readonly=0)
Nov 25 09:55:06 compute-1 ovn_controller[133620]: 2025-11-25T09:55:06Z|00043|binding|INFO|Releasing lport 1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1 from this chassis (sb_readonly=0)
Nov 25 09:55:06 compute-1 nova_compute[228683]: 2025-11-25 09:55:06.376 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:06 compute-1 nova_compute[228683]: 2025-11-25 09:55:06.380 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:06 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:06.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:06 compute-1 ceph-mon[79643]: pgmap v716: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:06 compute-1 nova_compute[228683]: 2025-11-25 09:55:06.965 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:07.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:07 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420005cd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:07 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:07 compute-1 nova_compute[228683]: 2025-11-25 09:55:07.812 228687 DEBUG nova.compute.manager [req-1ef597e7-855d-45fa-bf1a-ac4424b64a6d req-4a5b91e7-caea-4cac-92b1-1eec31dc2c6d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-changed-2706a828-75ff-4ea1-835e-f5308d75c14a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:07 compute-1 nova_compute[228683]: 2025-11-25 09:55:07.813 228687 DEBUG nova.compute.manager [req-1ef597e7-855d-45fa-bf1a-ac4424b64a6d req-4a5b91e7-caea-4cac-92b1-1eec31dc2c6d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Refreshing instance network info cache due to event network-changed-2706a828-75ff-4ea1-835e-f5308d75c14a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:55:07 compute-1 nova_compute[228683]: 2025-11-25 09:55:07.813 228687 DEBUG oslo_concurrency.lockutils [req-1ef597e7-855d-45fa-bf1a-ac4424b64a6d req-4a5b91e7-caea-4cac-92b1-1eec31dc2c6d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:55:07 compute-1 nova_compute[228683]: 2025-11-25 09:55:07.814 228687 DEBUG oslo_concurrency.lockutils [req-1ef597e7-855d-45fa-bf1a-ac4424b64a6d req-4a5b91e7-caea-4cac-92b1-1eec31dc2c6d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:55:07 compute-1 nova_compute[228683]: 2025-11-25 09:55:07.814 228687 DEBUG nova.network.neutron [req-1ef597e7-855d-45fa-bf1a-ac4424b64a6d req-4a5b91e7-caea-4cac-92b1-1eec31dc2c6d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Refreshing network info cache for port 2706a828-75ff-4ea1-835e-f5308d75c14a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:55:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:08 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:08 compute-1 ceph-mon[79643]: pgmap v717: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:09.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:09 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:09 compute-1 nova_compute[228683]: 2025-11-25 09:55:09.290 228687 DEBUG nova.network.neutron [req-1ef597e7-855d-45fa-bf1a-ac4424b64a6d req-4a5b91e7-caea-4cac-92b1-1eec31dc2c6d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updated VIF entry in instance network info cache for port 2706a828-75ff-4ea1-835e-f5308d75c14a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:55:09 compute-1 nova_compute[228683]: 2025-11-25 09:55:09.291 228687 DEBUG nova.network.neutron [req-1ef597e7-855d-45fa-bf1a-ac4424b64a6d req-4a5b91e7-caea-4cac-92b1-1eec31dc2c6d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updating instance_info_cache with network_info: [{"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:09 compute-1 nova_compute[228683]: 2025-11-25 09:55:09.309 228687 DEBUG oslo_concurrency.lockutils [req-1ef597e7-855d-45fa-bf1a-ac4424b64a6d req-4a5b91e7-caea-4cac-92b1-1eec31dc2c6d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
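
The refreshed cache entry carries everything later operations need; pulling the addressing out of that payload (a representative subset of the network_info list logged at 09:55:09.291):

    network_info = [{
        "id": "2706a828-75ff-4ea1-835e-f5308d75c14a",
        "address": "fa:16:3e:c5:15:b6",
        "network": {"subnets": [{"ips": [{
            "address": "10.100.0.10",
            "floating_ips": [{"address": "192.168.122.249"}],
        }]}]},
    }]
    vif = network_info[0]
    fixed = vif["network"]["subnets"][0]["ips"][0]
    print(fixed["address"], fixed["floating_ips"][0]["address"], vif["address"])
    # 10.100.0.10 192.168.122.249 fa:16:3e:c5:15:b6
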
Nov 25 09:55:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:09 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420005cd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:09 compute-1 nova_compute[228683]: 2025-11-25 09:55:09.745 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.870795) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509870819, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2388, "num_deletes": 251, "total_data_size": 6259244, "memory_usage": 6351864, "flush_reason": "Manual Compaction"}
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509880009, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4077759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20881, "largest_seqno": 23264, "table_properties": {"data_size": 4068072, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20087, "raw_average_key_size": 20, "raw_value_size": 4048608, "raw_average_value_size": 4101, "num_data_blocks": 267, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064307, "oldest_key_time": 1764064307, "file_creation_time": 1764064509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 9236 microseconds, and 5728 cpu microseconds.
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.880030) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4077759 bytes OK
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.880040) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.880555) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.880565) EVENT_LOG_v1 {"time_micros": 1764064509880562, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.880575) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6248637, prev total WAL file size 6248637, number of live WAL files 2.
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.881697) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(3982KB)], [39(11MB)]
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509881716, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16563833, "oldest_snapshot_seqno": -1}
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5460 keys, 14404559 bytes, temperature: kUnknown
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509915250, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14404559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14366070, "index_size": 23723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137535, "raw_average_key_size": 25, "raw_value_size": 14265283, "raw_average_value_size": 2612, "num_data_blocks": 980, "num_entries": 5460, "num_filter_entries": 5460, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.915516) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14404559 bytes
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.915926) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 491.8 rd, 427.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.9 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5984, records dropped: 524 output_compression: NoCompression
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.915939) EVENT_LOG_v1 {"time_micros": 1764064509915933, "job": 22, "event": "compaction_finished", "compaction_time_micros": 33677, "compaction_time_cpu_micros": 20034, "output_level": 6, "num_output_files": 1, "total_output_size": 14404559, "num_input_records": 5984, "num_output_records": 5460, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509916707, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509918105, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.881671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.918194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.918197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.918198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.918199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:55:09.918200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
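
This burst is ceph-mon's routine store maintenance: flush memtable #40 to L0 table #41 (~3.9 MB), then manually compact it with the existing L6 file #39 into #42 and delete both inputs. The amplification figures rocksdb prints can be re-derived from the byte counts in the event records:

    # Numbers from JOB 22's compaction_started/finished events above.
    l0_in, total_in, out = 4_077_759, 16_563_833, 14_404_559
    write_amp = out / l0_in            # 3.53 -> logged write-amplify(3.5)
    rw_amp = (total_in + out) / l0_in  # 7.59 -> logged read-write-amplify(7.6)
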
Nov 25 09:55:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:10 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:10.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:10 compute-1 ceph-mon[79643]: pgmap v718: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:11.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:11 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:11 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:11 compute-1 ceph-mon[79643]: pgmap v719: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:11 compute-1 nova_compute[228683]: 2025-11-25 09:55:11.967 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:12 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420005cd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:12.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:13.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:14 compute-1 ceph-mon[79643]: pgmap v720: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:55:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:14 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:14.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:14 compute-1 ovn_controller[133620]: 2025-11-25T09:55:14Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:15:b6 10.100.0.10
Nov 25 09:55:14 compute-1 ovn_controller[133620]: 2025-11-25T09:55:14Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:15:b6 10.100.0.10
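
There is no dnsmasq here: ovn-controller's pinctrl thread answers DHCP natively, and the OFFER/ACK pair hands out exactly the fixed IP and MAC nova cached for port 2706a828-75ff-4ea1-835e-f5308d75c14a, confirming the guest booted and configured its NIC. A trivial correlation check:

    import re

    line = "DHCPACK fa:16:3e:c5:15:b6 10.100.0.10"   # from the pinctrl record
    mac, ip = re.match(r"DHCPACK (\S+) (\S+)", line).groups()
    assert (mac, ip) == ("fa:16:3e:c5:15:b6", "10.100.0.10")  # matches the port cache
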
Nov 25 09:55:14 compute-1 nova_compute[228683]: 2025-11-25 09:55:14.747 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:15.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:15 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420005cd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:15 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:16 compute-1 ceph-mon[79643]: pgmap v721: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:55:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:16 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:16.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:16 compute-1 nova_compute[228683]: 2025-11-25 09:55:16.970 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:17.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:17 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438003030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:17 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:18 compute-1 ceph-mon[79643]: pgmap v722: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 09:55:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4116000474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3968335134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:18 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:18.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.911 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.912 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.912 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.912 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:55:18 compute-1 nova_compute[228683]: 2025-11-25 09:55:18.912 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:19.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:55:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3493657564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.241 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
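
The resource audit shells out to ceph because the ephemeral backend is RBD; the 0.33 s round trip returns the cluster's df JSON, from which the driver derives pool capacity. A sketch of the same probe (the command is the one logged above; the pool name "vms" is an assumption):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    pools = json.loads(out)["pools"]
    vms = next(p for p in pools if p["name"] == "vms")   # assumed pool name
    free_gb = vms["stats"]["max_avail"] / 1024 ** 3      # ~59.94 in this log
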
Nov 25 09:55:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:19 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6444003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.288 228687 DEBUG nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.288 228687 DEBUG nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:55:19 compute-1 podman[233299]: 2025-11-25 09:55:19.30962982 +0000 UTC m=+0.040482643 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
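
The health_status=healthy record is podman executing the check declared in config_data ('test': '/openstack/healthcheck', mounted read-only into the container); health_failing_streak=0 means it has not missed. The same check can be triggered by hand:

    import subprocess

    # Container name from the log; exit status 0 means the check passed.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=True)
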
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.479 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.480 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4769MB free_disk=59.94289016723633GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.481 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.481 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.535 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Instance 07e0280f-d3d7-48db-a9c4-01836517166c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.536 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.536 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:55:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3493657564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.561 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:19 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.748 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:55:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1215892976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.891 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.895 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.908 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.921 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:55:19 compute-1 nova_compute[228683]: 2025-11-25 09:55:19.921 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:20 compute-1 ceph-mon[79643]: pgmap v723: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1215892976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:20 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:20 compute-1 nova_compute[228683]: 2025-11-25 09:55:20.917 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:20 compute-1 nova_compute[228683]: 2025-11-25 09:55:20.918 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:20 compute-1 nova_compute[228683]: 2025-11-25 09:55:20.918 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:55:20 compute-1 nova_compute[228683]: 2025-11-25 09:55:20.918 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.075 228687 INFO nova.compute.manager [None req-a8ac109b-7ffa-401c-9791-96b57efe737f c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Get console output
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.078 228687 INFO oslo.privsep.daemon [None req-a8ac109b-7ffa-401c-9791-96b57efe737f c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpll6a3xol/privsep.sock']
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.167 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.167 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquired lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.168 228687 DEBUG nova.network.neutron [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.168 228687 DEBUG nova.objects.instance [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 07e0280f-d3d7-48db-a9c4-01836517166c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:55:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:21.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:21 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.600 228687 INFO oslo.privsep.daemon [None req-a8ac109b-7ffa-401c-9791-96b57efe737f c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Spawned new privsep daemon via rootwrap
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.520 233344 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.523 233344 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.525 233344 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.525 233344 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233344
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.675 233344 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 09:55:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:21 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6444004360 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:21 compute-1 sudo[233346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:55:21 compute-1 sudo[233346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:21 compute-1 sudo[233346]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.973 228687 DEBUG nova.network.neutron [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updating instance_info_cache with network_info: [{"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.974 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.984 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Releasing lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.984 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.985 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.985 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:21 compute-1 nova_compute[228683]: 2025-11-25 09:55:21.985 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:22 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:55:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:22.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:55:22 compute-1 ceph-mon[79643]: pgmap v724: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3590733530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1842284857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:22 compute-1 nova_compute[228683]: 2025-11-25 09:55:22.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:23.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:23 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:23 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:24 compute-1 nova_compute[228683]: 2025-11-25 09:55:24.422 228687 DEBUG nova.compute.manager [req-dc8b6712-9b28-43e1-9e88-78f1918330d1 req-76939e97-60fb-4645-a401-a26cd0de591c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-changed-2706a828-75ff-4ea1-835e-f5308d75c14a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:24 compute-1 nova_compute[228683]: 2025-11-25 09:55:24.423 228687 DEBUG nova.compute.manager [req-dc8b6712-9b28-43e1-9e88-78f1918330d1 req-76939e97-60fb-4645-a401-a26cd0de591c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Refreshing instance network info cache due to event network-changed-2706a828-75ff-4ea1-835e-f5308d75c14a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:55:24 compute-1 nova_compute[228683]: 2025-11-25 09:55:24.423 228687 DEBUG oslo_concurrency.lockutils [req-dc8b6712-9b28-43e1-9e88-78f1918330d1 req-76939e97-60fb-4645-a401-a26cd0de591c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:55:24 compute-1 nova_compute[228683]: 2025-11-25 09:55:24.423 228687 DEBUG oslo_concurrency.lockutils [req-dc8b6712-9b28-43e1-9e88-78f1918330d1 req-76939e97-60fb-4645-a401-a26cd0de591c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:55:24 compute-1 nova_compute[228683]: 2025-11-25 09:55:24.423 228687 DEBUG nova.network.neutron [req-dc8b6712-9b28-43e1-9e88-78f1918330d1 req-76939e97-60fb-4645-a401-a26cd0de591c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Refreshing network info cache for port 2706a828-75ff-4ea1-835e-f5308d75c14a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:55:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:24 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6444004360 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:24.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:24 compute-1 nova_compute[228683]: 2025-11-25 09:55:24.749 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:24 compute-1 ceph-mon[79643]: pgmap v725: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:55:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:25.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:55:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:25 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:25 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:26 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:26.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:26 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:26.688 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:55:26 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:26.689 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:55:26 compute-1 nova_compute[228683]: 2025-11-25 09:55:26.689 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:26 compute-1 nova_compute[228683]: 2025-11-25 09:55:26.761 228687 DEBUG nova.network.neutron [req-dc8b6712-9b28-43e1-9e88-78f1918330d1 req-76939e97-60fb-4645-a401-a26cd0de591c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updated VIF entry in instance network info cache for port 2706a828-75ff-4ea1-835e-f5308d75c14a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:55:26 compute-1 nova_compute[228683]: 2025-11-25 09:55:26.762 228687 DEBUG nova.network.neutron [req-dc8b6712-9b28-43e1-9e88-78f1918330d1 req-76939e97-60fb-4645-a401-a26cd0de591c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updating instance_info_cache with network_info: [{"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:26 compute-1 ceph-mon[79643]: pgmap v726: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:26 compute-1 nova_compute[228683]: 2025-11-25 09:55:26.778 228687 DEBUG oslo_concurrency.lockutils [req-dc8b6712-9b28-43e1-9e88-78f1918330d1 req-76939e97-60fb-4645-a401-a26cd0de591c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-07e0280f-d3d7-48db-a9c4-01836517166c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:55:26 compute-1 nova_compute[228683]: 2025-11-25 09:55:26.975 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:27.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:27 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:27 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:27 compute-1 podman[233374]: 2025-11-25 09:55:27.810688316 +0000 UTC m=+0.063008241 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:55:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:28 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:28.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:28 compute-1 ceph-mon[79643]: pgmap v727: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095528 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:55:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:29.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:29 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440053e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:29 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:29 compute-1 nova_compute[228683]: 2025-11-25 09:55:29.751 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:30 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:30.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:30 compute-1 ceph-mon[79643]: pgmap v728: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 16 KiB/s wr, 0 op/s
Nov 25 09:55:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:31 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:31 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:31 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1846692248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:31 compute-1 nova_compute[228683]: 2025-11-25 09:55:31.977 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:32 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440053e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:32.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:32 compute-1 ceph-mon[79643]: pgmap v729: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:55:32 compute-1 podman[233399]: 2025-11-25 09:55:32.823039823 +0000 UTC m=+0.077923313 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:55:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:33.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:33 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:33 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c001f90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:34 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:55:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:34.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:55:34 compute-1 nova_compute[228683]: 2025-11-25 09:55:34.753 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:34 compute-1 ceph-mon[79643]: pgmap v730: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.7 KiB/s wr, 0 op/s
Nov 25 09:55:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:35.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:35 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440053e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:35 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:55:35.691 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:35 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:36 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:55:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:36 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:36.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:36 compute-1 ceph-mon[79643]: pgmap v731: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.7 KiB/s wr, 0 op/s
Nov 25 09:55:36 compute-1 nova_compute[228683]: 2025-11-25 09:55:36.980 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:37.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:37 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:37 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:37 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/239035204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:37 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/444976167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:38 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c001f90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:38.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:38 compute-1 ceph-mon[79643]: pgmap v732: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:55:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:39.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:39 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:39 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:55:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:39 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:55:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:39 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:39 compute-1 nova_compute[228683]: 2025-11-25 09:55:39.754 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:40 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:40.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:40 compute-1 ceph-mon[79643]: pgmap v733: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:41.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:41 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c002ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:41 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:41 compute-1 nova_compute[228683]: 2025-11-25 09:55:41.982 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:42 compute-1 sudo[233421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:55:42 compute-1 sudo[233421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:42 compute-1 sudo[233421]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:42 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:55:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:42 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:42.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:42 compute-1 ceph-mon[79643]: pgmap v734: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:43.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c002ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:44 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:44.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:44 compute-1 nova_compute[228683]: 2025-11-25 09:55:44.756 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:44 compute-1 ceph-mon[79643]: pgmap v735: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:45 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:45 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:46 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c002ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:46.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:46 compute-1 ceph-mon[79643]: pgmap v736: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:46 compute-1 nova_compute[228683]: 2025-11-25 09:55:46.985 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:47.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:47 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420004660 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:47 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:48 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:48.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:48 compute-1 ceph-mon[79643]: pgmap v737: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 09:55:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:49.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:49 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:49 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6454002600 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:49 compute-1 nova_compute[228683]: 2025-11-25 09:55:49.759 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:49 compute-1 podman[233451]: 2025-11-25 09:55:49.777053046 +0000 UTC m=+0.038287937 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 09:55:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:50 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380041c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:50.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:50 compute-1 ceph-mon[79643]: pgmap v738: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 77 op/s
Nov 25 09:55:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:51.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:51 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380041c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:51 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380041c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:51 compute-1 nova_compute[228683]: 2025-11-25 09:55:51.988 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:52 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380041c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:52.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:52 compute-1 ceph-mon[79643]: pgmap v739: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 09:55:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:53.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:53 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380041c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:53 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380041c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:53 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/785351946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:55:53 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/785351946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:55:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:54 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380041c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:54.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:54 compute-1 nova_compute[228683]: 2025-11-25 09:55:54.761 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:54 compute-1 ceph-mon[79643]: pgmap v740: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:55.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:55 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380041c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:55 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:55 compute-1 ceph-mon[79643]: pgmap v741: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:56 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:56 compute-1 nova_compute[228683]: 2025-11-25 09:55:56.990 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:57.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:57 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:57 compute-1 sudo[233472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:55:57 compute-1 sudo[233472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:57 compute-1 sudo[233472]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:57 compute-1 sudo[233497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:55:57 compute-1 sudo[233497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:57 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:58 compute-1 sudo[233497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:58 compute-1 ceph-mon[79643]: pgmap v742: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:55:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:55:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:58 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:58.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:58 compute-1 podman[233551]: 2025-11-25 09:55:58.80345335 +0000 UTC m=+0.057718821 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 25 09:55:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:55:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:59.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:59 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:55:59 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:59 compute-1 nova_compute[228683]: 2025-11-25 09:55:59.763 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:59 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:55:59 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372125422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:00 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:56:00 compute-1 ceph-mon[79643]: pgmap v743: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:56:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3372125422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:00 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64540055c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:00.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:01.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.374 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "07e0280f-d3d7-48db-a9c4-01836517166c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.374 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.375 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.375 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.375 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.376 228687 INFO nova.compute.manager [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Terminating instance
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.376 228687 DEBUG nova.compute.manager [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:56:01 compute-1 kernel: tap2706a828-75 (unregistering): left promiscuous mode
Nov 25 09:56:01 compute-1 NetworkManager[48856]: <info>  [1764064561.4068] device (tap2706a828-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.417 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:01 compute-1 ovn_controller[133620]: 2025-11-25T09:56:01Z|00044|binding|INFO|Releasing lport 2706a828-75ff-4ea1-835e-f5308d75c14a from this chassis (sb_readonly=0)
Nov 25 09:56:01 compute-1 ovn_controller[133620]: 2025-11-25T09:56:01Z|00045|binding|INFO|Setting lport 2706a828-75ff-4ea1-835e-f5308d75c14a down in Southbound
Nov 25 09:56:01 compute-1 ovn_controller[133620]: 2025-11-25T09:56:01Z|00046|binding|INFO|Removing iface tap2706a828-75 ovn-installed in OVS
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.419 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.421 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:15:b6 10.100.0.10'], port_security=['fa:16:3e:c5:15:b6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07e0280f-d3d7-48db-a9c4-01836517166c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cdb4286d-2e7f-469b-b9a1-b5502a1f3b7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6eca7baa-8b78-4122-aef9-182609bf4892, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], logical_port=2706a828-75ff-4ea1-835e-f5308d75c14a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.422 142940 INFO neutron.agent.ovn.metadata.agent [-] Port 2706a828-75ff-4ea1-835e-f5308d75c14a in datapath ed91d6bf-56aa-4e17-a7ca-48f04cae081d unbound from our chassis
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.423 142940 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed91d6bf-56aa-4e17-a7ca-48f04cae081d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.424 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd3a30e-6a01-40ce-b450-5963076a8fca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.426 142940 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d namespace which is not needed anymore
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.434 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:01 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 25 09:56:01 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 11.706s CPU time.
Nov 25 09:56:01 compute-1 systemd-machined[192680]: Machine qemu-2-instance-00000004 terminated.
Nov 25 09:56:01 compute-1 anacron[4572]: Job `cron.monthly' started
Nov 25 09:56:01 compute-1 anacron[4572]: Job `cron.monthly' terminated
Nov 25 09:56:01 compute-1 anacron[4572]: Normal exit (3 jobs run)
Nov 25 09:56:01 compute-1 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[233252]: [NOTICE]   (233256) : haproxy version is 2.8.14-c23fe91
Nov 25 09:56:01 compute-1 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[233252]: [NOTICE]   (233256) : path to executable is /usr/sbin/haproxy
Nov 25 09:56:01 compute-1 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[233252]: [ALERT]    (233256) : Current worker (233258) exited with code 143 (Terminated)
Nov 25 09:56:01 compute-1 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[233252]: [WARNING]  (233256) : All workers exited. Exiting... (0)
Nov 25 09:56:01 compute-1 systemd[1]: libpod-9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989.scope: Deactivated successfully.
Nov 25 09:56:01 compute-1 podman[233598]: 2025-11-25 09:56:01.529169106 +0000 UTC m=+0.030826133 container died 9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:56:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-c61b8d763157fabeb1b45f4a1fdabd67b04b2892d31488fc7bbc343f8671d6a7-merged.mount: Deactivated successfully.
Nov 25 09:56:01 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989-userdata-shm.mount: Deactivated successfully.
Nov 25 09:56:01 compute-1 podman[233598]: 2025-11-25 09:56:01.549180754 +0000 UTC m=+0.050837781 container cleanup 9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:56:01 compute-1 systemd[1]: libpod-conmon-9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989.scope: Deactivated successfully.
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.593 228687 DEBUG nova.compute.manager [req-50a063a6-81d3-47f0-8139-93d7cce4f612 req-1297e70e-f6c6-4744-ac65-285e7eb6ecee c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-vif-unplugged-2706a828-75ff-4ea1-835e-f5308d75c14a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.593 228687 DEBUG oslo_concurrency.lockutils [req-50a063a6-81d3-47f0-8139-93d7cce4f612 req-1297e70e-f6c6-4744-ac65-285e7eb6ecee c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.593 228687 DEBUG oslo_concurrency.lockutils [req-50a063a6-81d3-47f0-8139-93d7cce4f612 req-1297e70e-f6c6-4744-ac65-285e7eb6ecee c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.594 228687 DEBUG oslo_concurrency.lockutils [req-50a063a6-81d3-47f0-8139-93d7cce4f612 req-1297e70e-f6c6-4744-ac65-285e7eb6ecee c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.594 228687 DEBUG nova.compute.manager [req-50a063a6-81d3-47f0-8139-93d7cce4f612 req-1297e70e-f6c6-4744-ac65-285e7eb6ecee c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] No waiting events found dispatching network-vif-unplugged-2706a828-75ff-4ea1-835e-f5308d75c14a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.595 228687 DEBUG nova.compute.manager [req-50a063a6-81d3-47f0-8139-93d7cce4f612 req-1297e70e-f6c6-4744-ac65-285e7eb6ecee c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-vif-unplugged-2706a828-75ff-4ea1-835e-f5308d75c14a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:56:01 compute-1 podman[233624]: 2025-11-25 09:56:01.596575666 +0000 UTC m=+0.029617896 container remove 9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.601 228687 INFO nova.virt.libvirt.driver [-] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Instance destroyed successfully.
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.601 228687 DEBUG nova.objects.instance [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid 07e0280f-d3d7-48db-a9c4-01836517166c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.610 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[d484d517-afe9-478b-8842-d6d8becb4133]: (4, ('Tue Nov 25 09:56:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d (9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989)\n9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989\nTue Nov 25 09:56:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d (9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989)\n9eb98422e4c97876ea664b1f2c997f132c455d41eb4d92fc249ad3116a1d3989\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.612 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[a5010e73-ee06-4396-ae76-92487f1940ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.613 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped91d6bf-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.613 228687 DEBUG nova.virt.libvirt.vif [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:54:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1502691703',display_name='tempest-TestNetworkBasicOps-server-1502691703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1502691703',id=4,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeyGwyncFURVvM+9ayfAnXNf9GI1br1ZEr3Cec7qxPaAGJ0uLMok0qr7FCAA2bcXAfJWXqJKIDoOOo5jOb/vKN2AnGmZWeaehzRLzEzyVtWlX9r830132IYt/QQXy8Zjw==',key_name='tempest-TestNetworkBasicOps-21893871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:55:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-2e0q5mpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:55:04Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=07e0280f-d3d7-48db-a9c4-01836517166c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.614 228687 DEBUG nova.network.os_vif_util [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "2706a828-75ff-4ea1-835e-f5308d75c14a", "address": "fa:16:3e:c5:15:b6", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2706a828-75", "ovs_interfaceid": "2706a828-75ff-4ea1-835e-f5308d75c14a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.614 228687 DEBUG nova.network.os_vif_util [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c5:15:b6,bridge_name='br-int',has_traffic_filtering=True,id=2706a828-75ff-4ea1-835e-f5308d75c14a,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2706a828-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:56:01 compute-1 kernel: taped91d6bf-50: left promiscuous mode
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.615 228687 DEBUG os_vif [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:15:b6,bridge_name='br-int',has_traffic_filtering=True,id=2706a828-75ff-4ea1-835e-f5308d75c14a,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2706a828-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.617 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.618 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2706a828-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.619 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.620 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.622 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.630 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.632 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd68baf-ec5b-4044-99e2-42ac16681cd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.633 228687 INFO os_vif [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:15:b6,bridge_name='br-int',has_traffic_filtering=True,id=2706a828-75ff-4ea1-835e-f5308d75c14a,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2706a828-75')
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.642 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[72cbe431-5819-49a3-ad9b-08df915e7d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.643 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[b13335db-1129-4921-8dd9-f33066cf89eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.654 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[11ea3ccc-791d-4fe3-9049-4948bd8885e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 328803, 'reachable_time': 19415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233662, 'error': None, 'target': 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:01 compute-1 systemd[1]: run-netns-ovnmeta\x2ded91d6bf\x2d56aa\x2d4e17\x2da7ca\x2d48f04cae081d.mount: Deactivated successfully.
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.659 143047 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:56:01 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:01.659 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[4b48945f-bf1b-4729-b4e9-ffe5948e8767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.787 228687 INFO nova.virt.libvirt.driver [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Deleting instance files /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c_del
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.787 228687 INFO nova.virt.libvirt.driver [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Deletion of /var/lib/nova/instances/07e0280f-d3d7-48db-a9c4-01836517166c_del complete
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.823 228687 INFO nova.compute.manager [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Took 0.45 seconds to destroy the instance on the hypervisor.
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.823 228687 DEBUG oslo.service.loopingcall [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.824 228687 DEBUG nova.compute.manager [-] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:56:01 compute-1 nova_compute[228683]: 2025-11-25 09:56:01.824 228687 DEBUG nova.network.neutron [-] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:56:02 compute-1 sudo[233668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:56:02 compute-1 sudo[233668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:02 compute-1 sudo[233668]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:02 compute-1 sudo[233693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:56:02 compute-1 sudo[233693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:02 compute-1 sudo[233693]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:02 compute-1 ceph-mon[79643]: pgmap v744: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Nov 25 09:56:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:56:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:56:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:02 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
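[annotation] The radosgw trio above repeats every couple of seconds from 192.168.122.100 and .102: an anonymous HEAD / over HTTP/1.0 answered 200 with an empty body, which looks like a load-balancer health probe rather than real S3 traffic. A minimal sketch of an equivalent probe (the frontend port is an assumption; the access log does not show it):

    import http.client

    # Hypothetical health probe against radosgw; 8080 is a guess at the
    # beast frontend port, and http.client speaks HTTP/1.1 rather than
    # the HTTP/1.0 seen in the log.
    conn = http.client.HTTPConnection("compute-1", 8080, timeout=2)
    conn.request("HEAD", "/")          # same request as in the access log
    print(conn.getresponse().status)   # expect 200 with a zero-length body
    conn.close()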
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.233 228687 DEBUG nova.network.neutron [-] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.243 228687 INFO nova.compute.manager [-] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Took 1.42 seconds to deallocate network for instance.
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.269 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.269 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:03 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:03.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.292 228687 DEBUG nova.compute.manager [req-f89b198e-60d9-4a5c-822f-711a5a87ba15 req-c13c41bc-dbc3-4724-9fd9-4757e6819b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-vif-deleted-2706a828-75ff-4ea1-835e-f5308d75c14a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.307 228687 DEBUG oslo_concurrency.processutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:03 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:56:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:03 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:56:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:56:03 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/660601116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.647 228687 DEBUG oslo_concurrency.processutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
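[annotation] Nova is shelling out to `ceph df` here to audit Ceph-backed disk capacity for the resource tracker, and the ceph-mon lines in between show the monitor dispatching that same command for client.openstack. A minimal sketch of the call and of pulling cluster totals from the JSON (the "stats"/"total_bytes" key names are the usual `ceph df --format=json` layout, assumed rather than shown in this log):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],       # exact command from the log
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out).get("stats", {})     # assumed key layout
    print(stats.get("total_bytes"), stats.get("total_avail_bytes"))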
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.651 228687 DEBUG nova.compute.provider_tree [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.662 228687 DEBUG nova.scheduler.client.report [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
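[annotation] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the figures logged above work out to 7169 MB of RAM, 16 vCPUs, and 52.2 GB of disk:

    # Worked arithmetic for the inventory dict in the preceding log line.
    inventory = {
        "MEMORY_MB": (7681, 512, 1.0),
        "VCPU": (4, 0, 4.0),
        "DISK_GB": (59, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # MEMORY_MB 7169.0, VCPU 16.0, DISK_GB 52.2 (up to float rounding)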
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.677 228687 DEBUG nova.compute.manager [req-fe149f8e-286d-4f1d-8242-b4d28af1dc33 req-318d7847-3ec5-4508-b825-be76b7708d05 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received event network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.677 228687 DEBUG oslo_concurrency.lockutils [req-fe149f8e-286d-4f1d-8242-b4d28af1dc33 req-318d7847-3ec5-4508-b825-be76b7708d05 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.677 228687 DEBUG oslo_concurrency.lockutils [req-fe149f8e-286d-4f1d-8242-b4d28af1dc33 req-318d7847-3ec5-4508-b825-be76b7708d05 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.677 228687 DEBUG oslo_concurrency.lockutils [req-fe149f8e-286d-4f1d-8242-b4d28af1dc33 req-318d7847-3ec5-4508-b825-be76b7708d05 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.677 228687 DEBUG nova.compute.manager [req-fe149f8e-286d-4f1d-8242-b4d28af1dc33 req-318d7847-3ec5-4508-b825-be76b7708d05 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] No waiting events found dispatching network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.678 228687 WARNING nova.compute.manager [req-fe149f8e-286d-4f1d-8242-b4d28af1dc33 req-318d7847-3ec5-4508-b825-be76b7708d05 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Received unexpected event network-vif-plugged-2706a828-75ff-4ea1-835e-f5308d75c14a for instance with vm_state deleted and task_state None.
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.679 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.698 228687 INFO nova.scheduler.client.report [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance 07e0280f-d3d7-48db-a9c4-01836517166c
Nov 25 09:56:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:03 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:03 compute-1 nova_compute[228683]: 2025-11-25 09:56:03.742 228687 DEBUG oslo_concurrency.lockutils [None req-92dd1d95-c866-4ebf-b339-82c0775f790a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "07e0280f-d3d7-48db-a9c4-01836517166c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:03 compute-1 podman[233741]: 2025-11-25 09:56:03.787774757 +0000 UTC m=+0.040062743 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:56:04 compute-1 ceph-mon[79643]: pgmap v745: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 31 op/s
Nov 25 09:56:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/660601116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:04 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:04.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:04 compute-1 nova_compute[228683]: 2025-11-25 09:56:04.764 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:05.001 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:05.001 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:05.001 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:05 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430002260 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:05.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:05 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:06 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:56:06 compute-1 ceph-mon[79643]: pgmap v746: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 31 op/s
Nov 25 09:56:06 compute-1 nova_compute[228683]: 2025-11-25 09:56:06.620 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:06 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:06.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:07 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:07.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:07 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430002260 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:08 compute-1 nova_compute[228683]: 2025-11-25 09:56:08.160 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:08 compute-1 nova_compute[228683]: 2025-11-25 09:56:08.241 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:08 compute-1 ceph-mon[79643]: pgmap v747: 337 pgs: 337 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 24 KiB/s wr, 60 op/s
Nov 25 09:56:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:08 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:09 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:09.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:09 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:09 compute-1 nova_compute[228683]: 2025-11-25 09:56:09.766 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:10 compute-1 ceph-mon[79643]: pgmap v748: 337 pgs: 337 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 12 KiB/s wr, 59 op/s
Nov 25 09:56:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:10 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:10.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:11 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:11.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:11 compute-1 nova_compute[228683]: 2025-11-25 09:56:11.621 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:11 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:12 compute-1 ceph-mon[79643]: pgmap v749: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 12 KiB/s wr, 60 op/s
Nov 25 09:56:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:12 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:12.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095612 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:56:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64300014c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:13.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:13 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:14 compute-1 ceph-mon[79643]: pgmap v750: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:14 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:14.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:14 compute-1 nova_compute[228683]: 2025-11-25 09:56:14.767 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:15 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:15.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:15 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430001660 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:16 compute-1 nova_compute[228683]: 2025-11-25 09:56:16.598 228687 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064561.5972593, 07e0280f-d3d7-48db-a9c4-01836517166c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:56:16 compute-1 nova_compute[228683]: 2025-11-25 09:56:16.598 228687 INFO nova.compute.manager [-] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] VM Stopped (Lifecycle Event)
Nov 25 09:56:16 compute-1 ceph-mon[79643]: pgmap v751: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:16 compute-1 nova_compute[228683]: 2025-11-25 09:56:16.621 228687 DEBUG nova.compute.manager [None req-91c4d572-29a0-4268-b02f-5d576ad684d4 - - - - - -] [instance: 07e0280f-d3d7-48db-a9c4-01836517166c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:56:16 compute-1 nova_compute[228683]: 2025-11-25 09:56:16.622 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:16 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64440064e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:16.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:17 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:17.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:17 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:18 compute-1 ceph-mon[79643]: pgmap v752: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:18 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430001660 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:18.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:18 compute-1 nova_compute[228683]: 2025-11-25 09:56:18.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:18 compute-1 nova_compute[228683]: 2025-11-25 09:56:18.914 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:18 compute-1 nova_compute[228683]: 2025-11-25 09:56:18.915 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:18 compute-1 nova_compute[228683]: 2025-11-25 09:56:18.915 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:18 compute-1 nova_compute[228683]: 2025-11-25 09:56:18.915 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:56:18 compute-1 nova_compute[228683]: 2025-11-25 09:56:18.915 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:56:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/481927686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.247 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:19 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430001660 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:56:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:19.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.431 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.432 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4940MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.432 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.432 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.484 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.484 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.500 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2161517121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/481927686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1833756951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:19 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6430001660 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.769 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:56:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2766823891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.840 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.843 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.856 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.871 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:56:19 compute-1 nova_compute[228683]: 2025-11-25 09:56:19.871 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:20 compute-1 ceph-mon[79643]: pgmap v753: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:56:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2766823891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:20 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:20.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:20 compute-1 podman[233814]: 2025-11-25 09:56:20.783842892 +0000 UTC m=+0.034990682 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 09:56:20 compute-1 nova_compute[228683]: 2025-11-25 09:56:20.871 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:20 compute-1 nova_compute[228683]: 2025-11-25 09:56:20.872 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:56:20 compute-1 nova_compute[228683]: 2025-11-25 09:56:20.872 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:56:20 compute-1 nova_compute[228683]: 2025-11-25 09:56:20.889 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:56:20 compute-1 nova_compute[228683]: 2025-11-25 09:56:20.892 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:20 compute-1 nova_compute[228683]: 2025-11-25 09:56:20.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:20 compute-1 nova_compute[228683]: 2025-11-25 09:56:20.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:20 compute-1 nova_compute[228683]: 2025-11-25 09:56:20.893 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
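[annotation] The burst of "Running periodic task ComputeManager._..." lines here and below is oslo.service's periodic task runner walking every decorated method on the compute manager; tasks such as _reclaim_queued_deletes simply return early when their interval is disabled, exactly as logged. A minimal sketch of the mechanism (illustrative names, not nova's actual code):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # Methods decorated this way are collected automatically and invoked
        # by run_periodic_tasks(), which emits the "Running periodic task"
        # debug line for each call.
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass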
Nov 25 09:56:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:21 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420000df0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:21 compute-1 nova_compute[228683]: 2025-11-25 09:56:21.623 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:21 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:21 compute-1 nova_compute[228683]: 2025-11-25 09:56:21.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:22 compute-1 sudo[233831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:56:22 compute-1 sudo[233831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:22 compute-1 sudo[233831]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:22 compute-1 ceph-mon[79643]: pgmap v754: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:56:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1784564239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3008631376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:22 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:22.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:23 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:23 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64200027b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:23 compute-1 nova_compute[228683]: 2025-11-25 09:56:23.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:23 compute-1 nova_compute[228683]: 2025-11-25 09:56:23.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:24 compute-1 ceph-mon[79643]: pgmap v755: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:56:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:24 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:24.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:24 compute-1 nova_compute[228683]: 2025-11-25 09:56:24.771 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:24 compute-1 nova_compute[228683]: 2025-11-25 09:56:24.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:25 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:56:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:25.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:56:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1129092219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:25 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:26 compute-1 nova_compute[228683]: 2025-11-25 09:56:26.625 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:26 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64200027b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:26 compute-1 ceph-mon[79643]: pgmap v756: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:56:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:26.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:27 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:27.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:27 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:27 compute-1 nova_compute[228683]: 2025-11-25 09:56:27.890 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:28 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:28 compute-1 ceph-mon[79643]: pgmap v757: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:56:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4154839492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:56:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3271148749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:56:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:29 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:29.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:29 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:29 compute-1 nova_compute[228683]: 2025-11-25 09:56:29.773 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:29 compute-1 podman[233860]: 2025-11-25 09:56:29.811048956 +0000 UTC m=+0.064560477 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
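[note] The podman line above is a healthcheck event for the ovn_controller container (health_status=healthy, failing streak 0), with the container's full config echoed into the event. The same status can be read back on demand; a sketch using podman inspect (the Go-template path is standard podman, the container name is taken from the log):

    import json
    import subprocess

    def container_health(name="ovn_controller"):
        """Return podman's view of a container's health ('healthy', 'unhealthy', ...)."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out).get("Status", "unknown")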
Nov 25 09:56:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:30 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f642c004130 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:30 compute-1 ceph-mon[79643]: pgmap v758: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:56:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:31 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:31.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:31 compute-1 nova_compute[228683]: 2025-11-25 09:56:31.628 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:31 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:32 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c0023d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:32 compute-1 ceph-mon[79643]: pgmap v759: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:32.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:33 compute-1 nova_compute[228683]: 2025-11-25 09:56:33.250 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:33 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:33.250 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:56:33 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:33.251 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:56:33 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:56:33.252 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
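[note] The three ovn_metadata_agent lines above are the standard OVN liveness handshake: northd bumped SB_Global.nb_cfg from 6 to 7, and the agent writes the observed value back into its Chassis_Private external_ids so the control plane can tell the agent is alive. Stripped of the ovsdbapp machinery, the acknowledgement amounts to the following (a hypothetical sketch; the key name is taken from the DbSetCommand above):

    def ack_nb_cfg(chassis_external_ids, observed_nb_cfg):
        """Record the latest seen SB_Global.nb_cfg on the chassis row."""
        key = "neutron:ovn-metadata-sb-cfg"
        if chassis_external_ids.get(key) != str(observed_nb_cfg):
            chassis_external_ids[key] = str(observed_nb_cfg)
        return chassis_external_ids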
Nov 25 09:56:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:33 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:33.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:33 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:34 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64200027b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:34 compute-1 ceph-mon[79643]: pgmap v760: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:34.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:34 compute-1 nova_compute[228683]: 2025-11-25 09:56:34.775 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:34 compute-1 podman[233886]: 2025-11-25 09:56:34.784892335 +0000 UTC m=+0.039553262 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 09:56:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:35 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c002d80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:35.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:35 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:36 compute-1 nova_compute[228683]: 2025-11-25 09:56:36.630 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:36 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:36 compute-1 ceph-mon[79643]: pgmap v761: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:36.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:37 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:37.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:37 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c002d80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:38 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:38 compute-1 ceph-mon[79643]: pgmap v762: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:38.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:39 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:56:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:39.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:56:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:39 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:39 compute-1 nova_compute[228683]: 2025-11-25 09:56:39.777 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:40 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c0044b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:40 compute-1 ceph-mon[79643]: pgmap v763: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:56:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:40.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:41 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:41.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:41 compute-1 nova_compute[228683]: 2025-11-25 09:56:41.633 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:41 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:42 compute-1 sudo[233907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:56:42 compute-1 sudo[233907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:42 compute-1 sudo[233907]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:42 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:42 compute-1 ceph-mon[79643]: pgmap v764: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 25 09:56:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:42.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c0044b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:43.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:43 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:44 compute-1 ovn_controller[133620]: 2025-11-25T09:56:44Z|00047|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 25 09:56:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:44 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:44 compute-1 ceph-mon[79643]: pgmap v765: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:44.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:44 compute-1 nova_compute[228683]: 2025-11-25 09:56:44.780 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:45 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:45.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:45 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c0051c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:46 compute-1 nova_compute[228683]: 2025-11-25 09:56:46.635 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:46 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:46 compute-1 ceph-mon[79643]: pgmap v766: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:47 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:56:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:56:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:47 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:48 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:48 compute-1 ceph-mon[79643]: pgmap v767: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:48.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:49 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64380043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:49 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c0051c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:49 compute-1 nova_compute[228683]: 2025-11-25 09:56:49.781 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:50 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:50 compute-1 ceph-mon[79643]: pgmap v768: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:50.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:51 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:51.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:51 compute-1 nova_compute[228683]: 2025-11-25 09:56:51.638 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:51 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:51 compute-1 podman[233937]: 2025-11-25 09:56:51.809006404 +0000 UTC m=+0.063699834 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 09:56:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:52 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c005ed0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:52 compute-1 ceph-mon[79643]: pgmap v769: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:52.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:53 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:53.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:53 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:54 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:54 compute-1 nova_compute[228683]: 2025-11-25 09:56:54.782 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:54 compute-1 ceph-mon[79643]: pgmap v770: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 09:56:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3382139923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:56:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3382139923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:56:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:54.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:55 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c005ed0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:55 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f645c005ed0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:56 compute-1 nova_compute[228683]: 2025-11-25 09:56:56.641 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:56 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:56.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:56 compute-1 ceph-mon[79643]: pgmap v771: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 09:56:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:57 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:57.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:57 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004460 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:58 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6438004460 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:58 compute-1 nova_compute[228683]: 2025-11-25 09:56:58.731 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "53e5cf08-5d11-4733-b518-a4dc16d22e15" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:58 compute-1 nova_compute[228683]: 2025-11-25 09:56:58.732 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:58 compute-1 nova_compute[228683]: 2025-11-25 09:56:58.742 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:56:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:58.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:58 compute-1 ceph-mon[79643]: pgmap v772: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:56:58 compute-1 nova_compute[228683]: 2025-11-25 09:56:58.803 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:58 compute-1 nova_compute[228683]: 2025-11-25 09:56:58.803 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:58 compute-1 nova_compute[228683]: 2025-11-25 09:56:58.808 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:56:58 compute-1 nova_compute[228683]: 2025-11-25 09:56:58.808 228687 INFO nova.compute.claims [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Claim successful on node compute-1.ctlplane.example.com
Nov 25 09:56:58 compute-1 nova_compute[228683]: 2025-11-25 09:56:58.890 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:59 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:56:59 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4012642929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.224 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
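[note] Here nova's resource tracker shells out to Ceph to size the RBD backend: the mon's handle_command/dispatch lines are the server side of the `ceph df --format=json` call, which returned in 0.334s. A sketch of the same query, assuming the standard JSON shape of `ceph df` output (the pool name below is an assumption):

    import json
    import subprocess

    def ceph_pool_max_avail(pool, conf="/etc/ceph/ceph.conf", client="openstack"):
        """Bytes still available to one pool, per `ceph df --format=json`."""
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
            capture_output=True, text=True, check=True,
        ).stdout
        for p in json.loads(out)["pools"]:
            if p["name"] == pool:
                return p["stats"]["max_avail"]
        raise KeyError(pool)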
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.229 228687 DEBUG nova.compute.provider_tree [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.244 228687 DEBUG nova.scheduler.client.report [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.258 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
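[note] The Acquiring/acquired/released triple around instance_claim is oslo.concurrency's standard lock trace; the 0.454s hold covers the NUMA fit check, the claim, and the placement inventory comparison above. The guarding pattern is the lockutils.synchronized decorator (the decorator is real oslo.concurrency API; the function below is a hypothetical stand-in for the critical section):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def claim_resources(instance_uuid):
        # Mutate shared resource-tracker state; only one thread holding
        # the "compute_resources" lock runs this at a time.
        ...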
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.258 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.300 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.300 228687 DEBUG nova.network.neutron [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:56:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:59 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.313 228687 INFO nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.330 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:56:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:56:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:59.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.403 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.404 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.405 228687 INFO nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Creating image(s)
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.422 228687 DEBUG nova.storage.rbd_utils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.439 228687 DEBUG nova.storage.rbd_utils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.456 228687 DEBUG nova.storage.rbd_utils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
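The three identical DEBUG lines above are nova's RBD driver probing whether the instance disk already exists in the vms pool before creating it. A minimal sketch of that probe with the python-rbd bindings, assuming python3-rados and python3-rbd are installed; the pool, client name, and image name are taken from the logged commands, while the surrounding structure is illustrative rather than nova's actual code:

    import rados
    import rbd

    # Connect as the same client.openstack user seen in the ceph-mon audit lines.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        # Opening the image raises ImageNotFound when it does not exist yet,
        # which nova logs as "rbd image ... does not exist".
        try:
            image = rbd.Image(ioctx, '53e5cf08-5d11-4733-b518-a4dc16d22e15_disk',
                              read_only=True)
            image.close()
            exists = True
        except rbd.ImageNotFound:
            exists = False
    finally:
        ioctx.close()
        cluster.shutdown()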
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.458 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.505 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
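The qemu-img probe above is wrapped in oslo_concurrency.prlimit so a malformed qcow2 cannot exhaust the host: --as=1073741824 caps the address space at 1 GiB and --cpu=30 caps CPU seconds. A sketch of the same call through oslo.concurrency's Python API, which is the mechanism behind the logged "python3 -m oslo_concurrency.prlimit" command line; the path is the cached base image from the log:

    from oslo_concurrency import processutils

    # Same resource caps as the logged command: 1 GiB address space, 30 s CPU.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    # execute() re-runs the command under "python3 -m oslo_concurrency.prlimit",
    # exactly as the "Running cmd" line shows.
    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})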
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.506 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.506 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.506 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.523 228687 DEBUG nova.storage.rbd_utils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.526 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.656 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.702 228687 DEBUG nova.storage.rbd_utils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
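The resize target above is simply the flavor's 1 GiB root disk expressed in bytes (the m1.nano flavor later in this trace has root_gb=1):

    # 1 GiB root disk -> bytes, matching "resizing rbd image ... to 1073741824"
    root_gb = 1
    assert root_gb * 1024 ** 3 == 1073741824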
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.755 228687 DEBUG nova.objects.instance [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid 53e5cf08-5d11-4733-b518-a4dc16d22e15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.767 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.767 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Ensure instance console log exists: /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.768 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.768 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.768 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:56:59 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f643000a6a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:59 compute-1 nova_compute[228683]: 2025-11-25 09:56:59.783 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4012642929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:00 compute-1 nova_compute[228683]: 2025-11-25 09:57:00.248 228687 DEBUG nova.policy [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
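The failed policy check above is expected for a tenant token: network:attach_external_network defaults to admin-only, and this request carries only the member and reader roles with is_admin False, so nova logs the denial at DEBUG and simply proceeds without external-network access. A minimal oslo.policy sketch of the same decision; the rule string here is an assumption standing in for nova's real default:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumed admin-only default, for illustration only.
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))

    creds = {'roles': ['member', 'reader'], 'is_admin': False}
    # False for this token, matching the "Policy check ... failed" line.
    print(enforcer.enforce('network:attach_external_network', {}, creds))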
Nov 25 09:57:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:57:00 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:00.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:00 compute-1 podman[234146]: 2025-11-25 09:57:00.799557909 +0000 UTC m=+0.054617445 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 25 09:57:00 compute-1 ceph-mon[79643]: pgmap v773: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:57:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:01 compute-1 kernel: ganesha.nfsd[233884]: segfault at 50 ip 00007f64dcdd632e sp 00007f64a3ffe210 error 4 in libntirpc.so.5.8[7f64dcdbb000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 25 09:57:01 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
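The kernel segfault line above can be decoded from its error code: on x86, bit 0 distinguishes protection faults from not-present pages, bit 1 write from read, and bit 2 user from kernel mode. A quick decode, in Python for consistency with the other examples here:

    err = 4  # from "segfault at 50 ... error 4"
    print('page present:', bool(err & 1))  # False: unmapped page
    print('write access:', bool(err & 2))  # False: it was a read
    print('user mode:   ', bool(err & 4))  # True
    # A user-mode read of an unmapped page at address 0x50 is the classic
    # signature of dereferencing a field ~0x50 bytes into a NULL struct
    # pointer, consistent with the "mov 0x50(%r13)"-style instruction bytes.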
Nov 25 09:57:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[232363]: 25/11/2025 09:57:01 : epoch 69257cb9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6420003a90 fd 47 proxy ignored for local
Nov 25 09:57:01 compute-1 systemd[1]: Started Process Core Dump (PID 234170/UID 0).
Nov 25 09:57:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:01.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:01 compute-1 nova_compute[228683]: 2025-11-25 09:57:01.386 228687 DEBUG nova.network.neutron [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Successfully created port: b2edc7ae-85db-40d5-b391-dc394d1fabf2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:57:01 compute-1 nova_compute[228683]: 2025-11-25 09:57:01.642 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:01 compute-1 nova_compute[228683]: 2025-11-25 09:57:01.928 228687 DEBUG nova.network.neutron [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Successfully updated port: b2edc7ae-85db-40d5-b391-dc394d1fabf2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:57:01 compute-1 nova_compute[228683]: 2025-11-25 09:57:01.941 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:57:01 compute-1 nova_compute[228683]: 2025-11-25 09:57:01.941 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:57:01 compute-1 nova_compute[228683]: 2025-11-25 09:57:01.941 228687 DEBUG nova.network.neutron [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:57:02 compute-1 nova_compute[228683]: 2025-11-25 09:57:02.022 228687 DEBUG nova.compute.manager [req-78eef1c8-81c8-430b-9293-5c79767b0cda req-bc985a50-ca2d-4d44-ae5d-2cad6a76774f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received event network-changed-b2edc7ae-85db-40d5-b391-dc394d1fabf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:57:02 compute-1 nova_compute[228683]: 2025-11-25 09:57:02.022 228687 DEBUG nova.compute.manager [req-78eef1c8-81c8-430b-9293-5c79767b0cda req-bc985a50-ca2d-4d44-ae5d-2cad6a76774f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Refreshing instance network info cache due to event network-changed-b2edc7ae-85db-40d5-b391-dc394d1fabf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:57:02 compute-1 nova_compute[228683]: 2025-11-25 09:57:02.023 228687 DEBUG oslo_concurrency.lockutils [req-78eef1c8-81c8-430b-9293-5c79767b0cda req-bc985a50-ca2d-4d44-ae5d-2cad6a76774f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:57:02 compute-1 nova_compute[228683]: 2025-11-25 09:57:02.083 228687 DEBUG nova.network.neutron [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:57:02 compute-1 sudo[234172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:57:02 compute-1 sudo[234172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:02 compute-1 sudo[234172]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:02 compute-1 sudo[234197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:57:02 compute-1 sudo[234197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:02 compute-1 sudo[234222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:57:02 compute-1 sudo[234222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:02 compute-1 sudo[234222]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:02 compute-1 sudo[234197]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:02 compute-1 systemd-coredump[234171]: Process 232367 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 64:
                                                    #0  0x00007f64dcdd632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
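systemd-coredump captured the ganesha.nfsd crash, but the trace resolves only to libntirpc.so.5.8 + 0x2232e. With matching debuginfo installed, that offset can be mapped to a function and source line; a sketch using binutils' addr2line via subprocess (the output depends on which debuginfo packages are actually present):

    import subprocess

    # Frame from the stack trace above: /usr/lib64/libntirpc.so.5.8 + 0x2232e
    result = subprocess.run(
        ['addr2line', '-f', '-C', '-e', '/usr/lib64/libntirpc.so.5.8', '0x2232e'],
        capture_output=True, text=True, check=False)
    print(result.stdout)  # function and file:line, or "??" without debuginfo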
Nov 25 09:57:02 compute-1 systemd[1]: systemd-coredump@9-234170-0.service: Deactivated successfully.
Nov 25 09:57:02 compute-1 systemd[1]: systemd-coredump@9-234170-0.service: Consumed 1.362s CPU time.
Nov 25 09:57:02 compute-1 podman[234280]: 2025-11-25 09:57:02.769575962 +0000 UTC m=+0.017645931 container died 1414ee295e652c56ad5b032d63ca5158421ccdea6f71124a8c14884b43c458dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:57:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-8bffdae5295bebf8043d7fc879b15ace735055cce44be68cb8a849c57045d316-merged.mount: Deactivated successfully.
Nov 25 09:57:02 compute-1 podman[234280]: 2025-11-25 09:57:02.788289985 +0000 UTC m=+0.036359954 container remove 1414ee295e652c56ad5b032d63ca5158421ccdea6f71124a8c14884b43c458dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:57:02 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:57:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:02.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:02 compute-1 ceph-mon[79643]: pgmap v774: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:02 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:57:02 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.194s CPU time.
Nov 25 09:57:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.392 228687 DEBUG nova.network.neutron [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Updating instance_info_cache with network_info: [{"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.406 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.406 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Instance network_info: |[{"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.406 228687 DEBUG oslo_concurrency.lockutils [req-78eef1c8-81c8-430b-9293-5c79767b0cda req-bc985a50-ca2d-4d44-ae5d-2cad6a76774f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.406 228687 DEBUG nova.network.neutron [req-78eef1c8-81c8-430b-9293-5c79767b0cda req-bc985a50-ca2d-4d44-ae5d-2cad6a76774f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Refreshing network info cache for port b2edc7ae-85db-40d5-b391-dc394d1fabf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.409 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Start _get_guest_xml network_info=[{"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'boot_index': 0, 'device_name': '/dev/vda', 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.413 228687 WARNING nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.418 228687 DEBUG nova.virt.libvirt.host [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.418 228687 DEBUG nova.virt.libvirt.host [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.420 228687 DEBUG nova.virt.libvirt.host [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.420 228687 DEBUG nova.virt.libvirt.host [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.421 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.421 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.421 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.421 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.422 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.422 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.422 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.422 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.422 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.422 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.422 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.423 228687 DEBUG nova.virt.hardware [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
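The nova.virt.hardware lines above trace the CPU topology solver: with no flavor or image constraints, the limits fall back to 65536 for sockets, cores, and threads; every factorization of the vCPU count is then enumerated, and for vcpus=1 the only candidate is 1:1:1. A simplified illustration of that enumeration (not nova's actual implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield sockets, cores, threads

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged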
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.425 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:57:03 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 09:57:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:57:03 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3364850300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.787 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.805 228687 DEBUG nova.storage.rbd_utils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:57:03 compute-1 nova_compute[228683]: 2025-11-25 09:57:03.808 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:57:03 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3364850300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:57:04 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:57:04 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1248720210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.150 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
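nova runs "ceph mon dump --format=json" here to learn the monitor addresses; they reappear as the <host> elements of both rbd <source> blocks in the guest XML below. A sketch of extracting the v1 (port 6789) endpoints from that JSON; the payload is abridged to a single monitor of this cluster and the exact field layout is assumed from the mon dump format:

    import json

    # Abridged "ceph mon dump --format=json" output (one mon shown).
    mon_dump = json.loads('''
    {"epoch": 3, "mons": [
      {"rank": 0, "name": "compute-0",
       "public_addrs": {"addrvec": [
         {"type": "v2", "addr": "192.168.122.100:3300", "nonce": 0},
         {"type": "v1", "addr": "192.168.122.100:6789", "nonce": 0}]}}]}
    ''')

    hosts = [a['addr'] for mon in mon_dump['mons']
             for a in mon['public_addrs']['addrvec'] if a['type'] == 'v1']
    print(hosts)  # ['192.168.122.100:6789'] -> <host name="..." port="6789"/>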
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.152 228687 DEBUG nova.virt.libvirt.vif [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2023050108',display_name='tempest-TestNetworkBasicOps-server-2023050108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2023050108',id=7,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIXpjoEuOPqgV4AsTstNNUWTfa0aaNuXQdP5MqqmSo79o93Keg4jRRrK20IzTqU2dcwtjvSL9ynwgR0qrziME3a4BTQXjzpiMpsdxFMBiGdjPjC5fJVezQHyvIXN436nOA==',key_name='tempest-TestNetworkBasicOps-1729635165',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-pwovpbvg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:56:59Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=53e5cf08-5d11-4733-b518-a4dc16d22e15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.152 228687 DEBUG nova.network.os_vif_util [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.153 228687 DEBUG nova.network.os_vif_util [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:73:02,bridge_name='br-int',has_traffic_filtering=True,id=b2edc7ae-85db-40d5-b391-dc394d1fabf2,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2edc7ae-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.154 228687 DEBUG nova.objects.instance [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid 53e5cf08-5d11-4733-b518-a4dc16d22e15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.165 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <uuid>53e5cf08-5d11-4733-b518-a4dc16d22e15</uuid>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <name>instance-00000007</name>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <memory>131072</memory>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <vcpu>1</vcpu>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <metadata>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <nova:name>tempest-TestNetworkBasicOps-server-2023050108</nova:name>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <nova:creationTime>2025-11-25 09:57:03</nova:creationTime>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <nova:flavor name="m1.nano">
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <nova:memory>128</nova:memory>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <nova:disk>1</nova:disk>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <nova:swap>0</nova:swap>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       </nova:flavor>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <nova:owner>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       </nova:owner>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <nova:ports>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <nova:port uuid="b2edc7ae-85db-40d5-b391-dc394d1fabf2">
Nov 25 09:57:04 compute-1 nova_compute[228683]:           <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         </nova:port>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       </nova:ports>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </nova:instance>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   </metadata>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <sysinfo type="smbios">
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <system>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <entry name="serial">53e5cf08-5d11-4733-b518-a4dc16d22e15</entry>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <entry name="uuid">53e5cf08-5d11-4733-b518-a4dc16d22e15</entry>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </system>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   </sysinfo>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <os>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <boot dev="hd"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <smbios mode="sysinfo"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   </os>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <features>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <acpi/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <apic/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <vmcoreinfo/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   </features>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <clock offset="utc">
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <timer name="hpet" present="no"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   </clock>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <cpu mode="host-model" match="exact">
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   </cpu>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   <devices>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <disk type="network" device="disk">
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <driver type="raw" cache="none"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <source protocol="rbd" name="vms/53e5cf08-5d11-4733-b518-a4dc16d22e15_disk">
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       </source>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <auth username="openstack">
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       </auth>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <target dev="vda" bus="virtio"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <disk type="network" device="cdrom">
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <driver type="raw" cache="none"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <source protocol="rbd" name="vms/53e5cf08-5d11-4733-b518-a4dc16d22e15_disk.config">
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       </source>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <auth username="openstack">
Nov 25 09:57:04 compute-1 nova_compute[228683]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       </auth>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <target dev="sda" bus="sata"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </disk>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <interface type="ethernet">
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <mac address="fa:16:3e:32:73:02"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <model type="virtio"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <mtu size="1442"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <target dev="tapb2edc7ae-85"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </interface>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <serial type="pty">
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <log file="/var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15/console.log" append="off"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </serial>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <video>
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <model type="virtio"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </video>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <input type="tablet" bus="usb"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <rng model="virtio">
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </rng>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <controller type="usb" index="0"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     <memballoon model="virtio">
Nov 25 09:57:04 compute-1 nova_compute[228683]:       <stats period="10"/>
Nov 25 09:57:04 compute-1 nova_compute[228683]:     </memballoon>
Nov 25 09:57:04 compute-1 nova_compute[228683]:   </devices>
Nov 25 09:57:04 compute-1 nova_compute[228683]: </domain>
Nov 25 09:57:04 compute-1 nova_compute[228683]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
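The dump above is the complete guest definition nova hands to libvirt. Once the domain is running (systemd-machined registers it as qemu-3-instance-00000007 a second later in this log), the live definition can be read back and diffed against it. A minimal sketch with the libvirt Python bindings, assuming the libvirt name instance-00000007 and a local qemu:///system connection:

    import libvirt  # python3-libvirt bindings

    conn = libvirt.open('qemu:///system')
    try:
        # instance-00000007 is the name under which systemd starts the machine below.
        dom = conn.lookupByName('instance-00000007')
        # XMLDesc() returns the live domain XML, comparable to the _get_guest_xml dump.
        print(dom.XMLDesc())
    finally:
        conn.close()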
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.165 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Preparing to wait for external event network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.166 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.166 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.166 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.166 228687 DEBUG nova.virt.libvirt.vif [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2023050108',display_name='tempest-TestNetworkBasicOps-server-2023050108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2023050108',id=7,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIXpjoEuOPqgV4AsTstNNUWTfa0aaNuXQdP5MqqmSo79o93Keg4jRRrK20IzTqU2dcwtjvSL9ynwgR0qrziME3a4BTQXjzpiMpsdxFMBiGdjPjC5fJVezQHyvIXN436nOA==',key_name='tempest-TestNetworkBasicOps-1729635165',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-pwovpbvg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:56:59Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=53e5cf08-5d11-4733-b518-a4dc16d22e15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.167 228687 DEBUG nova.network.os_vif_util [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.167 228687 DEBUG nova.network.os_vif_util [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:73:02,bridge_name='br-int',has_traffic_filtering=True,id=b2edc7ae-85db-40d5-b391-dc394d1fabf2,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2edc7ae-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.167 228687 DEBUG os_vif [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:73:02,bridge_name='br-int',has_traffic_filtering=True,id=b2edc7ae-85db-40d5-b391-dc394d1fabf2,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2edc7ae-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.168 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.168 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.168 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.171 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.171 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2edc7ae-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.172 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2edc7ae-85, col_values=(('external_ids', {'iface-id': 'b2edc7ae-85db-40d5-b391-dc394d1fabf2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:73:02', 'vm-uuid': '53e5cf08-5d11-4733-b518-a4dc16d22e15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.173 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-1 NetworkManager[48856]: <info>  [1764064624.1740] manager: (tapb2edc7ae-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.176 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.177 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.178 228687 INFO os_vif [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:73:02,bridge_name='br-int',has_traffic_filtering=True,id=b2edc7ae-85db-40d5-b391-dc394d1fabf2,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2edc7ae-85')
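The three ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are what "Successfully plugged vif" amounts to on the OVS side. A minimal equivalent sketch with ovs-vsctl via subprocess, with the port name and external_ids copied from the DbSetCommand line (illustrative only, not os_vif's actual code path):

    import subprocess

    PORT = 'tapb2edc7ae-85'
    IFACE_ID = 'b2edc7ae-85db-40d5-b391-dc394d1fabf2'

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', 'br-int', '--',
                    'set', 'Bridge', 'br-int', 'datapath_type=system'],
                   check=True)
    # AddPortCommand(may_exist=True) plus DbSetCommand on the Interface row
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT, '--',
                    'set', 'Interface', PORT,
                    f'external_ids:iface-id={IFACE_ID}',
                    'external_ids:iface-status=active',
                    'external_ids:attached-mac=fa:16:3e:32:73:02',
                    'external_ids:vm-uuid=53e5cf08-5d11-4733-b518-a4dc16d22e15'],
                   check=True)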
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.203 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.204 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.204 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:32:73:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.204 228687 INFO nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Using config drive
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.219 228687 DEBUG nova.storage.rbd_utils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.477 228687 INFO nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Creating config drive at /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15/disk.config
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.481 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvgv4b3xv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.597 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvgv4b3xv" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.614 228687 DEBUG nova.storage.rbd_utils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.616 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15/disk.config 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.722 228687 DEBUG oslo_concurrency.processutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15/disk.config 53e5cf08-5d11-4733-b518-a4dc16d22e15_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.723 228687 INFO nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Deleting local config drive /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15/disk.config because it was imported into RBD.
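The config-drive sequence above runs in three steps: build the ISO locally with mkisofs, import it into the Ceph vms pool, then delete the local copy. A minimal sketch of the same steps, commands and paths taken from the two CMD lines above (mkisofs flags abridged; the /tmp path is the temporary metadata tree nova staged for this boot):

    import os
    import subprocess

    INST = '53e5cf08-5d11-4733-b518-a4dc16d22e15'
    ISO = f'/var/lib/nova/instances/{INST}/disk.config'

    # Step 1: build the config-2 ISO from the staged metadata directory.
    subprocess.run(['/usr/bin/mkisofs', '-o', ISO, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                    '/tmp/tmpvgv4b3xv'], check=True)
    # Step 2: import it into RBD as <uuid>_disk.config in the vms pool.
    subprocess.run(['rbd', 'import', '--pool', 'vms', ISO,
                    f'{INST}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    # Step 3: the local ISO is redundant once it lives in RBD.
    os.unlink(ISO)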
Nov 25 09:57:04 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 25 09:57:04 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 25 09:57:04 compute-1 kernel: tapb2edc7ae-85: entered promiscuous mode
Nov 25 09:57:04 compute-1 ovn_controller[133620]: 2025-11-25T09:57:04Z|00048|binding|INFO|Claiming lport b2edc7ae-85db-40d5-b391-dc394d1fabf2 for this chassis.
Nov 25 09:57:04 compute-1 ovn_controller[133620]: 2025-11-25T09:57:04Z|00049|binding|INFO|b2edc7ae-85db-40d5-b391-dc394d1fabf2: Claiming fa:16:3e:32:73:02 10.100.0.21
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.785 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-1 NetworkManager[48856]: <info>  [1764064624.7877] manager: (tapb2edc7ae-85): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.794 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:73:02 10.100.0.21'], port_security=['fa:16:3e:32:73:02 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '53e5cf08-5d11-4733-b518-a4dc16d22e15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cf74985-519e-4c22-9e8e-5d45c028c6c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e3e0a9c-90d8-4bb2-a9a5-b8401547fa81, chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], logical_port=b2edc7ae-85db-40d5-b391-dc394d1fabf2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.795 142940 INFO neutron.agent.ovn.metadata.agent [-] Port b2edc7ae-85db-40d5-b391-dc394d1fabf2 in datapath 23a0542a-b85d-40e7-8bd9-6ee0d43b0306 bound to our chassis
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.796 142940 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 23a0542a-b85d-40e7-8bd9-6ee0d43b0306
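ovn-controller claims the logical port for this chassis, and the metadata agent reacts to the resulting Port_Binding update. The binding state can be verified directly against the OVN Southbound DB; a minimal sketch, assuming ovn-sbctl on this node can reach the Southbound connection (otherwise pass --db):

    import subprocess

    LPORT = 'b2edc7ae-85db-40d5-b391-dc394d1fabf2'

    # The 'chassis' and 'up' columns mirror the Claiming/up log lines above.
    out = subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                          f'logical_port={LPORT}'],
                         check=True, capture_output=True, text=True).stdout
    print(out)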
Nov 25 09:57:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:04.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.803 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0674ac-5d49-4583-a387-cc3975359928]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.804 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap23a0542a-b1 in ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.805 231684 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap23a0542a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.805 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[eddefd5f-d042-411f-99c6-c7d3d82dec8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.807 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[737ce0da-6d7c-4fb9-a82e-3b20abc083c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.819 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[475074c9-6c2d-4ead-b6c9-b1bd44ce765b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 systemd-udevd[234471]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.833 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-1 ceph-mon[79643]: pgmap v775: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1248720210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:57:04 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:57:04 compute-1 NetworkManager[48856]: <info>  [1764064624.8438] device (tapb2edc7ae-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.842 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.845 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[073fc70d-39de-4f62-b68a-704f14e5fcbb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 NetworkManager[48856]: <info>  [1764064624.8463] device (tapb2edc7ae-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:57:04 compute-1 systemd-machined[192680]: New machine qemu-3-instance-00000007.
Nov 25 09:57:04 compute-1 ovn_controller[133620]: 2025-11-25T09:57:04Z|00050|binding|INFO|Setting lport b2edc7ae-85db-40d5-b391-dc394d1fabf2 ovn-installed in OVS
Nov 25 09:57:04 compute-1 ovn_controller[133620]: 2025-11-25T09:57:04Z|00051|binding|INFO|Setting lport b2edc7ae-85db-40d5-b391-dc394d1fabf2 up in Southbound
Nov 25 09:57:04 compute-1 nova_compute[228683]: 2025-11-25 09:57:04.850 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.871 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bb7371-cd6b-46f5-8744-2e457cbb584d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.874 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[377f835d-b14e-41b0-a09e-6b9c575b06c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 NetworkManager[48856]: <info>  [1764064624.8755] manager: (tap23a0542a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.901 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[98b0ded1-b2f5-4640-bb36-3e2d437b56a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.903 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[28addd4d-31b6-455b-918b-6de8268b72f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 podman[234465]: 2025-11-25 09:57:04.919069575 +0000 UTC m=+0.082504638 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:57:04 compute-1 NetworkManager[48856]: <info>  [1764064624.9194] device (tap23a0542a-b0): carrier: link connected
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.924 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bbed08-5423-45b2-b26b-a1e997665094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.936 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[2eeb09f9-e16a-4509-b753-8bb7d9fce63f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23a0542a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:61:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 340946, 'reachable_time': 26080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234510, 'error': None, 'target': 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.948 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[4669d3be-31bc-4408-b324-f2da95da1116]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:61b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 340946, 'tstamp': 340946}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234511, 'error': None, 'target': 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.959 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[a3fb9653-d862-4a37-b283-ca6631faf755]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23a0542a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:61:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 340946, 'reachable_time': 26080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234512, 'error': None, 'target': 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
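The two RTM_NEWLINK replies above are netlink dumps of the tap23a0542a-b1 veth end, taken inside the ovnmeta namespace by the privsep daemon (pyroute2 underneath). A minimal sketch of the same query, assuming the namespace still exists and pyroute2 is installed:

    from pyroute2 import NetNS

    NS = 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306'

    with NetNS(NS) as ns:
        # Same RTM_GETLINK round-trip the agent performs via privsep.
        idx = ns.link_lookup(ifname='tap23a0542a-b1')[0]
        link = ns.get_links(idx)[0]
        print(link.get_attr('IFLA_ADDRESS'), link.get_attr('IFLA_OPERSTATE'))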
Nov 25 09:57:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:04.978 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[55d92eb7-63b5-471e-94b0-b410d59613e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.001 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.002 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.002 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.015 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6138a6-de65-4d34-8107-7e58d357e9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.016 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23a0542a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.016 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.017 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23a0542a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:57:05 compute-1 NetworkManager[48856]: <info>  [1764064625.0187] manager: (tap23a0542a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 25 09:57:05 compute-1 kernel: tap23a0542a-b0: entered promiscuous mode
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.022 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap23a0542a-b0, col_values=(('external_ids', {'iface-id': '6cdb5dbb-946e-4292-9f33-2e4e1c3771ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.022 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:05 compute-1 ovn_controller[133620]: 2025-11-25T09:57:05Z|00052|binding|INFO|Releasing lport 6cdb5dbb-946e-4292-9f33-2e4e1c3771ee from this chassis (sb_readonly=0)
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.024 142940 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/23a0542a-b85d-40e7-8bd9-6ee0d43b0306.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/23a0542a-b85d-40e7-8bd9-6ee0d43b0306.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.029 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[e27a3fb5-a564-4add-a396-cc7342d48b88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.030 142940 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: global
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     log         /dev/log local0 debug
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     log-tag     haproxy-metadata-proxy-23a0542a-b85d-40e7-8bd9-6ee0d43b0306
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     user        root
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     group       root
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     maxconn     1024
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     pidfile     /var/lib/neutron/external/pids/23a0542a-b85d-40e7-8bd9-6ee0d43b0306.pid.haproxy
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     daemon
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: defaults
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     log global
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     mode http
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     option httplog
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     option dontlognull
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     option http-server-close
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     option forwardfor
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     retries                 3
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     timeout http-request    30s
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     timeout connect         30s
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     timeout client          32s
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     timeout server          32s
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     timeout http-keep-alive 30s
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: listen listener
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     bind 169.254.169.254:80
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:     http-request add-header X-OVN-Network-ID 23a0542a-b85d-40e7-8bd9-6ee0d43b0306
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:57:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:57:05.031 142940 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'env', 'PROCESS_TAG=haproxy-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/23a0542a-b85d-40e7-8bd9-6ee0d43b0306.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
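The generated haproxy config binds 169.254.169.254:80 inside the ovnmeta namespace, adds the X-OVN-Network-ID header, and forwards to the Unix socket /var/lib/neutron/metadata_proxy served by the metadata agent. A minimal smoke test from the host, assuming the namespace and proxy are up (the /openstack path just lists metadata API versions):

    import subprocess

    NS = 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306'

    # curl from inside the namespace hits the haproxy listener defined above.
    subprocess.run(['ip', 'netns', 'exec', NS, 'curl', '-s',
                    'http://169.254.169.254/openstack'], check=True)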
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.037 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:05.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:05 compute-1 podman[234542]: 2025-11-25 09:57:05.351339041 +0000 UTC m=+0.038412793 container create b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:57:05 compute-1 systemd[1]: Started libpod-conmon-b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd.scope.
Nov 25 09:57:05 compute-1 systemd[1]: Started libcrun container.
Nov 25 09:57:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13053e17ff7c74aed036a54b01e0baa350bcf23645d21f87a608ae3a91a7c2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:05 compute-1 podman[234542]: 2025-11-25 09:57:05.420757455 +0000 UTC m=+0.107831198 container init b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 09:57:05 compute-1 podman[234542]: 2025-11-25 09:57:05.425969641 +0000 UTC m=+0.113043383 container start b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 09:57:05 compute-1 podman[234542]: 2025-11-25 09:57:05.334744983 +0000 UTC m=+0.021818745 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:57:05 compute-1 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[234594]: [NOTICE]   (234599) : New worker (234602) forked
Nov 25 09:57:05 compute-1 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[234594]: [NOTICE]   (234599) : Loading success.
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.447 228687 DEBUG nova.compute.manager [req-8b0be21f-8539-4a7d-a108-56a169de2d64 req-d67dac1f-724b-4823-957b-70afe7c19c0d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received event network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.448 228687 DEBUG oslo_concurrency.lockutils [req-8b0be21f-8539-4a7d-a108-56a169de2d64 req-d67dac1f-724b-4823-957b-70afe7c19c0d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.448 228687 DEBUG oslo_concurrency.lockutils [req-8b0be21f-8539-4a7d-a108-56a169de2d64 req-d67dac1f-724b-4823-957b-70afe7c19c0d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.448 228687 DEBUG oslo_concurrency.lockutils [req-8b0be21f-8539-4a7d-a108-56a169de2d64 req-d67dac1f-724b-4823-957b-70afe7c19c0d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.448 228687 DEBUG nova.compute.manager [req-8b0be21f-8539-4a7d-a108-56a169de2d64 req-d67dac1f-724b-4823-957b-70afe7c19c0d c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Processing event network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.455 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064625.455351, 53e5cf08-5d11-4733-b518-a4dc16d22e15 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.455 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] VM Started (Lifecycle Event)
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.457 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
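This completes the external-event handshake started at 09:57:04.165: the compute manager registers interest in network-vif-plugged before defining the guest, neutron delivers the event once OVN binds the port, and the waiter is released. The pattern is an event registry keyed by (instance, event name); a minimal conceptual sketch with threading.Event, not nova's implementation:

    import threading

    events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance, name):            # cf. prepare_for_instance_event
        events[(instance, name)] = threading.Event()

    def pop(instance, name):                # cf. external_instance_event/_pop_event
        events[(instance, name)].set()

    def wait(instance, name, timeout=300):  # cf. wait_for_instance_event
        if not events[(instance, name)].wait(timeout):
            raise TimeoutError(f'{name} not received for {instance}')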
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.463 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.465 228687 INFO nova.virt.libvirt.driver [-] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Instance spawned successfully.
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.465 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.476 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.483 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.486 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.487 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.487 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.487 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.488 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.488 228687 DEBUG nova.virt.libvirt.driver [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.505 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.505 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064625.455453, 53e5cf08-5d11-4733-b518-a4dc16d22e15 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.506 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] VM Paused (Lifecycle Event)
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.527 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.529 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064625.4629707, 53e5cf08-5d11-4733-b518-a4dc16d22e15 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.529 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] VM Resumed (Lifecycle Event)
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.542 228687 INFO nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Took 6.14 seconds to spawn the instance on the hypervisor.
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.542 228687 DEBUG nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.543 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.547 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.566 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.590 228687 INFO nova.compute.manager [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Took 6.81 seconds to build instance.
Nov 25 09:57:05 compute-1 nova_compute[228683]: 2025-11-25 09:57:05.603 228687 DEBUG oslo_concurrency.lockutils [None req-a3f6a217-f424-4930-8946-0df0feacf208 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
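The "Lock ... acquired by"/"released by" pairs above are emitted by oslo.concurrency's lockutils wrappers (lockutils.py:409/423). A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the lock names mirror the "<uuid>-events" and "compute_resources" keys in the log but are otherwise illustrative:

    # Context-manager and decorator forms of the oslo.concurrency locks whose
    # acquire/release DEBUG lines appear throughout this log.
    from oslo_concurrency import lockutils

    instance_uuid = "53e5cf08-5d11-4733-b518-a4dc16d22e15"

    with lockutils.lock(f"{instance_uuid}-events"):
        # critical section, e.g. popping a waiter out of the per-instance
        # event dict as _pop_event does above
        pass

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # serialized against the other resource-tracker operations
        pass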
Nov 25 09:57:06 compute-1 nova_compute[228683]: 2025-11-25 09:57:06.284 228687 DEBUG nova.network.neutron [req-78eef1c8-81c8-430b-9293-5c79767b0cda req-bc985a50-ca2d-4d44-ae5d-2cad6a76774f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Updated VIF entry in instance network info cache for port b2edc7ae-85db-40d5-b391-dc394d1fabf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:57:06 compute-1 nova_compute[228683]: 2025-11-25 09:57:06.284 228687 DEBUG nova.network.neutron [req-78eef1c8-81c8-430b-9293-5c79767b0cda req-bc985a50-ca2d-4d44-ae5d-2cad6a76774f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Updating instance_info_cache with network_info: [{"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:57:06 compute-1 nova_compute[228683]: 2025-11-25 09:57:06.299 228687 DEBUG oslo_concurrency.lockutils [req-78eef1c8-81c8-430b-9293-5c79767b0cda req-bc985a50-ca2d-4d44-ae5d-2cad6a76774f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
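The network_info blob cached at 09:57:06 is plain JSON. A short sketch, using only field names visible in the entry above (trimmed to the fields actually read), that extracts the port ID, MAC, and fixed IPs:

    # Extract port ID, MAC address and fixed IPs from a cached network_info
    # entry; the structure is trimmed from the log line above.
    import json

    network_info = json.loads('''[{
        "id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2",
        "address": "fa:16:3e:32:73:02",
        "network": {"subnets": [{"cidr": "10.100.0.16/28",
                                 "ips": [{"address": "10.100.0.21",
                                          "type": "fixed"}]}]},
        "active": false
    }]''')

    for vif in network_info:
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["id"], vif["address"], fixed, vif["active"])

Note that "active" is false at this point; the same entry is re-cached with "active": true at 09:57:24 below, once the OVN port binding comes up.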
Nov 25 09:57:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:06.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
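The radosgw "beast" access lines recur every couple of seconds as load-balancer health checks probe HEAD /. A sketch of a regex fitted to the anonymous probes shown here (authenticated requests carry extra fields, so treat the pattern as an assumption rather than the full format):

    # Parse radosgw beast access-log lines like the one above and pull out
    # client, request, status and latency.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous '
            '[25/Nov/2025:09:57:06.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    if m:
        print(m["client"], m["req"], m["status"], float(m["latency"]))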
Nov 25 09:57:06 compute-1 ceph-mon[79643]: pgmap v776: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095707 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:57:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:07.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:07 compute-1 nova_compute[228683]: 2025-11-25 09:57:07.512 228687 DEBUG nova.compute.manager [req-44cff67d-0734-42ad-8e7a-60ae57b0d94f req-393fcfd1-991b-46b4-a5af-037634f3fc70 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received event network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:57:07 compute-1 nova_compute[228683]: 2025-11-25 09:57:07.513 228687 DEBUG oslo_concurrency.lockutils [req-44cff67d-0734-42ad-8e7a-60ae57b0d94f req-393fcfd1-991b-46b4-a5af-037634f3fc70 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:07 compute-1 nova_compute[228683]: 2025-11-25 09:57:07.513 228687 DEBUG oslo_concurrency.lockutils [req-44cff67d-0734-42ad-8e7a-60ae57b0d94f req-393fcfd1-991b-46b4-a5af-037634f3fc70 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:07 compute-1 nova_compute[228683]: 2025-11-25 09:57:07.513 228687 DEBUG oslo_concurrency.lockutils [req-44cff67d-0734-42ad-8e7a-60ae57b0d94f req-393fcfd1-991b-46b4-a5af-037634f3fc70 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:57:07 compute-1 nova_compute[228683]: 2025-11-25 09:57:07.513 228687 DEBUG nova.compute.manager [req-44cff67d-0734-42ad-8e7a-60ae57b0d94f req-393fcfd1-991b-46b4-a5af-037634f3fc70 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] No waiting events found dispatching network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:57:07 compute-1 nova_compute[228683]: 2025-11-25 09:57:07.513 228687 WARNING nova.compute.manager [req-44cff67d-0734-42ad-8e7a-60ae57b0d94f req-393fcfd1-991b-46b4-a5af-037634f3fc70 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received unexpected event network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 for instance with vm_state active and task_state None.
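The WARNING above is a second copy of network-vif-plugged arriving after the spawn waiter already consumed the first one at 09:57:05: pop_instance_event finds no registered waiter, so the event is logged as unexpected and dropped. A hypothetical mini-registry illustrating that dispatch logic (names invented for illustration; this is not Nova's actual code):

    # Hypothetical waiter registry: an event with no registered waiter takes
    # the "Received unexpected event" path seen above.
    import threading

    class EventRegistry:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, uuid, name):
            with self._lock:
                ev = threading.Event()
                self._waiters[(uuid, name)] = ev
                return ev

        def pop(self, uuid, name):
            with self._lock:
                return self._waiters.pop((uuid, name), None)

    registry = EventRegistry()
    waiter = registry.pop("53e5cf08-5d11-4733-b518-a4dc16d22e15",
                          "network-vif-plugged-b2edc7ae")
    if waiter is None:
        print("WARNING: received unexpected event")  # nobody is waiting
    else:
        waiter.set()  # wake the thread blocked in wait_for_instance_event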
Nov 25 09:57:07 compute-1 sudo[234608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:57:07 compute-1 sudo[234608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:07 compute-1 sudo[234608]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:08 compute-1 ceph-mon[79643]: pgmap v777: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:57:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:08 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:08.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:09 compute-1 nova_compute[228683]: 2025-11-25 09:57:09.174 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:09.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:09 compute-1 nova_compute[228683]: 2025-11-25 09:57:09.844 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:10 compute-1 ceph-mon[79643]: pgmap v778: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 25 09:57:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:10.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:11.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:12 compute-1 ceph-mon[79643]: pgmap v779: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 25 09:57:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:12.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:12 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 10.
Nov 25 09:57:12 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:57:12 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.194s CPU time.
Nov 25 09:57:12 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:57:13 compute-1 podman[234673]: 2025-11-25 09:57:13.153609927 +0000 UTC m=+0.033886907 container create eba3f60e070fbd8ec2d5c6afc313d2dcade77d802c695d4ece122bdfcd693d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:57:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4e819dc0d7c46ac4135fa3fee3dd3eb71f977de5c126921fe95a99989d47e3/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4e819dc0d7c46ac4135fa3fee3dd3eb71f977de5c126921fe95a99989d47e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4e819dc0d7c46ac4135fa3fee3dd3eb71f977de5c126921fe95a99989d47e3/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4e819dc0d7c46ac4135fa3fee3dd3eb71f977de5c126921fe95a99989d47e3/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.yfzsxe-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:13 compute-1 podman[234673]: 2025-11-25 09:57:13.212651341 +0000 UTC m=+0.092928331 container init eba3f60e070fbd8ec2d5c6afc313d2dcade77d802c695d4ece122bdfcd693d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:57:13 compute-1 podman[234673]: 2025-11-25 09:57:13.217014084 +0000 UTC m=+0.097291064 container start eba3f60e070fbd8ec2d5c6afc313d2dcade77d802c695d4ece122bdfcd693d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:57:13 compute-1 bash[234673]: eba3f60e070fbd8ec2d5c6afc313d2dcade77d802c695d4ece122bdfcd693d7d
Nov 25 09:57:13 compute-1 podman[234673]: 2025-11-25 09:57:13.136539009 +0000 UTC m=+0.016815999 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:57:13 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:57:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:57:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:57:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:57:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:57:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:57:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:57:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:57:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:57:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:13.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:14 compute-1 nova_compute[228683]: 2025-11-25 09:57:14.177 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:14 compute-1 ceph-mon[79643]: pgmap v780: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 09:57:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:14.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:14 compute-1 nova_compute[228683]: 2025-11-25 09:57:14.847 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:15.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:15 compute-1 ovn_controller[133620]: 2025-11-25T09:57:15Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:73:02 10.100.0.21
Nov 25 09:57:15 compute-1 ovn_controller[133620]: 2025-11-25T09:57:15Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:73:02 10.100.0.21
Nov 25 09:57:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:16 compute-1 ceph-mon[79643]: pgmap v781: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 09:57:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:16.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:18 compute-1 ceph-mon[79643]: pgmap v782: 337 pgs: 337 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 09:57:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:18.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:19 compute-1 nova_compute[228683]: 2025-11-25 09:57:19.179 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:19 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:57:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:19 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:57:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:19.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3818507626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:19 compute-1 nova_compute[228683]: 2025-11-25 09:57:19.849 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:20 compute-1 ceph-mon[79643]: pgmap v783: 337 pgs: 337 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Nov 25 09:57:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2969080741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:20.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:20 compute-1 nova_compute[228683]: 2025-11-25 09:57:20.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:20 compute-1 nova_compute[228683]: 2025-11-25 09:57:20.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:20 compute-1 nova_compute[228683]: 2025-11-25 09:57:20.917 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:20 compute-1 nova_compute[228683]: 2025-11-25 09:57:20.918 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:20 compute-1 nova_compute[228683]: 2025-11-25 09:57:20.918 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:57:20 compute-1 nova_compute[228683]: 2025-11-25 09:57:20.918 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:57:20 compute-1 nova_compute[228683]: 2025-11-25 09:57:20.918 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:57:21 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:57:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2898854963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.268 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
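The resource audit shells out to the ceph CLI exactly as logged. A sketch of the same probe via oslo.concurrency's processutils (the module named in the log); it assumes /etc/ceph/ceph.conf and the "openstack" keyring are in place and that the output follows the usual "ceph df --format=json" schema (stats.total_avail_bytes):

    # Run the same "ceph df" probe the resource tracker logs above and
    # report free capacity.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(f"free: {stats['total_avail_bytes'] / (1 << 30):.2f} GiB")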
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.314 228687 DEBUG nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.314 228687 DEBUG nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:57:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:21.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.501 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.502 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4745MB free_disk=59.89723205566406GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.502 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.502 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.548 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Instance 53e5cf08-5d11-4733-b518-a4dc16d22e15 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.548 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.549 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.577 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:57:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2898854963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:21 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:57:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/881388465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.915 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.919 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.931 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
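Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above works out to 16 VCPU, 7169 MB of RAM, and about 52 DISK_GB. A quick check of that arithmetic:

    # capacity = (total - reserved) * allocation_ratio, per resource class,
    # using the inventory data from the log line above.
    inventory = {
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> MEMORY_MB 7169.0, VCPU 16.0, DISK_GB 52.2 (approx.)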
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.952 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:57:21 compute-1 nova_compute[228683]: 2025-11-25 09:57:21.952 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:57:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:22 compute-1 sudo[234778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:57:22 compute-1 sudo[234778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:22 compute-1 sudo[234778]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:22 compute-1 podman[234802]: 2025-11-25 09:57:22.367881981 +0000 UTC m=+0.037618341 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:57:22 compute-1 ceph-mon[79643]: pgmap v784: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Nov 25 09:57:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/881388465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:22.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:22 compute-1 nova_compute[228683]: 2025-11-25 09:57:22.952 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:22 compute-1 nova_compute[228683]: 2025-11-25 09:57:22.952 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:57:22 compute-1 nova_compute[228683]: 2025-11-25 09:57:22.952 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:57:23 compute-1 nova_compute[228683]: 2025-11-25 09:57:23.226 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:57:23 compute-1 nova_compute[228683]: 2025-11-25 09:57:23.227 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquired lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:57:23 compute-1 nova_compute[228683]: 2025-11-25 09:57:23.227 228687 DEBUG nova.network.neutron [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:57:23 compute-1 nova_compute[228683]: 2025-11-25 09:57:23.227 228687 DEBUG nova.objects.instance [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53e5cf08-5d11-4733-b518-a4dc16d22e15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:57:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3899372740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2790036785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.180 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.545 228687 DEBUG nova.network.neutron [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Updating instance_info_cache with network_info: [{"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.557 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Releasing lock "refresh_cache-53e5cf08-5d11-4733-b518-a4dc16d22e15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.557 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.557 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.557 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.557 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.558 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.558 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:57:24 compute-1 ceph-mon[79643]: pgmap v785: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:57:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:24.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.851 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:24 compute-1 nova_compute[228683]: 2025-11-25 09:57:24.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:57:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:25.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:26 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4001c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:26 compute-1 ceph-mon[79643]: pgmap v786: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:57:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:26 compute-1 nova_compute[228683]: 2025-11-25 09:57:26.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:27 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:27.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:27 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc4001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.956707) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647956724, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1635, "num_deletes": 250, "total_data_size": 4003751, "memory_usage": 4067656, "flush_reason": "Manual Compaction"}
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647961052, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1602202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23269, "largest_seqno": 24899, "table_properties": {"data_size": 1597003, "index_size": 2403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13718, "raw_average_key_size": 20, "raw_value_size": 1585615, "raw_average_value_size": 2373, "num_data_blocks": 105, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064510, "oldest_key_time": 1764064510, "file_creation_time": 1764064647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4369 microseconds, and 3075 cpu microseconds.
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961075) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1602202 bytes OK
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961085) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961423) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961433) EVENT_LOG_v1 {"time_micros": 1764064647961430, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961441) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3996184, prev total WAL file size 3996184, number of live WAL files 2.
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.962020) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1564KB)], [42(13MB)]
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647962044, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16006761, "oldest_snapshot_seqno": -1}
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5675 keys, 12929267 bytes, temperature: kUnknown
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647993438, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 12929267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12892178, "index_size": 21811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 142364, "raw_average_key_size": 25, "raw_value_size": 12790567, "raw_average_value_size": 2253, "num_data_blocks": 895, "num_entries": 5675, "num_filter_entries": 5675, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.993584) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12929267 bytes
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.993975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 509.2 rd, 411.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.7 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(18.1) write-amplify(8.1) OK, records in: 6128, records dropped: 453 output_compression: NoCompression
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.993988) EVENT_LOG_v1 {"time_micros": 1764064647993982, "job": 24, "event": "compaction_finished", "compaction_time_micros": 31434, "compaction_time_cpu_micros": 18720, "output_level": 6, "num_output_files": 1, "total_output_size": 12929267, "num_input_records": 6128, "num_output_records": 5675, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647994217, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647995833, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.995867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.995870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.995871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.995872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:27.995873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:28 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc4001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:28 compute-1 ceph-mon[79643]: pgmap v787: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Nov 25 09:57:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:29 compute-1 nova_compute[228683]: 2025-11-25 09:57:29.183 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095729 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:57:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:29 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc4001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:29.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:29 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:29 compute-1 nova_compute[228683]: 2025-11-25 09:57:29.853 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:30 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:30 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc40091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:30 compute-1 ceph-mon[79643]: pgmap v788: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 26 KiB/s wr, 5 op/s
Nov 25 09:57:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:31 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:31.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:31 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:31 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4002730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:31 compute-1 podman[234840]: 2025-11-25 09:57:31.80728117 +0000 UTC m=+0.062093919 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, tcib_managed=true)
Nov 25 09:57:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:32 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:32 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc4009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:32 compute-1 ceph-mon[79643]: pgmap v789: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 27 KiB/s wr, 6 op/s
Nov 25 09:57:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:33 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc4009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:33.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:33 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:33 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc4009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:34 compute-1 nova_compute[228683]: 2025-11-25 09:57:34.185 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:34 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:34 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc00030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:34.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:34 compute-1 ceph-mon[79643]: pgmap v790: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:57:34 compute-1 nova_compute[228683]: 2025-11-25 09:57:34.854 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:35 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:35.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:35 compute-1 podman[234864]: 2025-11-25 09:57:35.784942922 +0000 UTC m=+0.040597277 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 25 09:57:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:35 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:36 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400ae20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:36 compute-1 ceph-mon[79643]: pgmap v791: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:57:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:37 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc00030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:37.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:37 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:37 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:38 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:38 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:38.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:38 compute-1 ceph-mon[79643]: pgmap v792: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 16 KiB/s wr, 3 op/s
Nov 25 09:57:39 compute-1 nova_compute[228683]: 2025-11-25 09:57:39.187 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:39 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400ae20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:39.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:39 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:39 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc00030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:39 compute-1 nova_compute[228683]: 2025-11-25 09:57:39.856 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:40 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:40 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400bb30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:40.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:40 compute-1 ceph-mon[79643]: pgmap v793: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.1 KiB/s wr, 0 op/s
Nov 25 09:57:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:41 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:41.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:41 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:41 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:42 compute-1 sudo[234885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:57:42 compute-1 sudo[234885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:42 compute-1 sudo[234885]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:42 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:42 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc00041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:42.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:42 compute-1 ceph-mon[79643]: pgmap v794: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 6.4 KiB/s wr, 1 op/s
Nov 25 09:57:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:43 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400c450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:43 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:43 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:44 compute-1 nova_compute[228683]: 2025-11-25 09:57:44.191 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:44 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:44 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400c450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:44 compute-1 nova_compute[228683]: 2025-11-25 09:57:44.857 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:44.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:44 compute-1 ceph-mon[79643]: pgmap v795: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 25 09:57:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:45 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc00041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:45.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:45 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:45 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400c450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:46 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:46 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400c450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:46.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:46 compute-1 ceph-mon[79643]: pgmap v796: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 25 09:57:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:47 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:47.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:47 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:47 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:48 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:48 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400c5d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:48.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:48 compute-1 ceph-mon[79643]: pgmap v797: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 5.7 KiB/s wr, 1 op/s
Nov 25 09:57:49 compute-1 nova_compute[228683]: 2025-11-25 09:57:49.193 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:49 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400c5d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:49.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:49 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:49 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400c5d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:49 compute-1 nova_compute[228683]: 2025-11-25 09:57:49.859 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:49 compute-1 ceph-mon[79643]: pgmap v798: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 2.7 KiB/s wr, 0 op/s
Nov 25 09:57:50 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:50 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:50.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:51 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55a81893fbc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:51.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:51 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:51 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55a81893fbc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:52 compute-1 ceph-mon[79643]: pgmap v799: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 6.3 KiB/s wr, 1 op/s
Nov 25 09:57:52 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:52 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x55a81893fbc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:52 compute-1 podman[234915]: 2025-11-25 09:57:52.811990162 +0000 UTC m=+0.062789921 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 09:57:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:52.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.963333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672963359, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 475, "num_deletes": 257, "total_data_size": 640084, "memory_usage": 649528, "flush_reason": "Manual Compaction"}
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672965173, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 422803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24904, "largest_seqno": 25374, "table_properties": {"data_size": 420215, "index_size": 624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 5728, "raw_average_key_size": 17, "raw_value_size": 415138, "raw_average_value_size": 1246, "num_data_blocks": 28, "num_entries": 333, "num_filter_entries": 333, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064648, "oldest_key_time": 1764064648, "file_creation_time": 1764064672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 1861 microseconds, and 1265 cpu microseconds.
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965197) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 422803 bytes OK
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965207) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965537) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965547) EVENT_LOG_v1 {"time_micros": 1764064672965544, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965555) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 637186, prev total WAL file size 637186, number of live WAL files 2.
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(412KB)], [45(12MB)]
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672965844, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13352070, "oldest_snapshot_seqno": -1}
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5486 keys, 13189484 bytes, temperature: kUnknown
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672999653, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13189484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13152922, "index_size": 21732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 139601, "raw_average_key_size": 25, "raw_value_size": 13053872, "raw_average_value_size": 2379, "num_data_blocks": 886, "num_entries": 5486, "num_filter_entries": 5486, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:57:52 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.999815) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13189484 bytes
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:53.000267) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 394.3 rd, 389.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 12.3 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(62.8) write-amplify(31.2) OK, records in: 6008, records dropped: 522 output_compression: NoCompression
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:53.000282) EVENT_LOG_v1 {"time_micros": 1764064673000275, "job": 26, "event": "compaction_finished", "compaction_time_micros": 33865, "compaction_time_cpu_micros": 25431, "output_level": 6, "num_output_files": 1, "total_output_size": 13189484, "num_input_records": 6008, "num_output_records": 5486, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064673000539, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064673002480, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:53.002587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:53.002590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:53.002592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:53.002593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:57:53.002595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:53 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:53.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:53 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:53 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:53 compute-1 ceph-mon[79643]: pgmap v800: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 1 op/s
Nov 25 09:57:53 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/687601177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:57:53 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/687601177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:57:54 compute-1 nova_compute[228683]: 2025-11-25 09:57:54.197 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:54 compute-1 ovn_controller[133620]: 2025-11-25T09:57:54Z|00053|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Nov 25 09:57:54 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:54 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:54 compute-1 nova_compute[228683]: 2025-11-25 09:57:54.861 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:54.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:55 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:55.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:55 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:55 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:56 compute-1 ceph-mon[79643]: pgmap v801: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 1 op/s
Nov 25 09:57:56 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:56 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:57 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:57.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:57 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:57 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:58 compute-1 ceph-mon[79643]: pgmap v802: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 7.3 KiB/s wr, 1 op/s
Nov 25 09:57:58 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:58 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:57:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:57:59 compute-1 nova_compute[228683]: 2025-11-25 09:57:59.199 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:59 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:57:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:59.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:59 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:57:59 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:59 compute-1 nova_compute[228683]: 2025-11-25 09:57:59.866 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:00 compute-1 ceph-mon[79643]: pgmap v803: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 7.0 KiB/s wr, 1 op/s
Nov 25 09:58:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:00 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:00 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:00.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:01 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd8004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:01.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:01 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:01 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:02 compute-1 sudo[234940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:58:02 compute-1 sudo[234940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:02 compute-1 sudo[234940]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:02 compute-1 podman[234964]: 2025-11-25 09:58:02.528579046 +0000 UTC m=+0.071943764 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 09:58:02 compute-1 ceph-mon[79643]: pgmap v804: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 8.3 KiB/s wr, 1 op/s
Nov 25 09:58:02 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:02 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:02.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:03 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:03.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:03 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:03 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd8004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:04 compute-1 nova_compute[228683]: 2025-11-25 09:58:04.203 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:04 compute-1 ceph-mon[79643]: pgmap v805: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:58:04 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:04 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:04 compute-1 nova_compute[228683]: 2025-11-25 09:58:04.866 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:05.002 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:05.003 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:05.003 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:05 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:05.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:05 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:05 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:06 compute-1 ceph-mon[79643]: pgmap v806: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:58:06 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:06 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd8004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:06 compute-1 podman[234990]: 2025-11-25 09:58:06.81286785 +0000 UTC m=+0.059203870 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:58:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:06.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:07 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd8004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:07.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:07 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:07 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:08 compute-1 sudo[235007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:58:08 compute-1 sudo[235007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:08 compute-1 sudo[235007]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:08 compute-1 sudo[235032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:58:08 compute-1 sudo[235032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:08 compute-1 podman[235112]: 2025-11-25 09:58:08.516603791 +0000 UTC m=+0.043413436 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:58:08 compute-1 podman[235112]: 2025-11-25 09:58:08.599728921 +0000 UTC m=+0.126538567 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:58:08 compute-1 ceph-mon[79643]: pgmap v807: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 2 op/s
Nov 25 09:58:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:08 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:09 compute-1 podman[235221]: 2025-11-25 09:58:09.090814721 +0000 UTC m=+0.050845165 container exec 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:58:09 compute-1 podman[235221]: 2025-11-25 09:58:09.098595748 +0000 UTC m=+0.058626172 container exec_died 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:58:09 compute-1 nova_compute[228683]: 2025-11-25 09:58:09.208 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:09 compute-1 podman[235283]: 2025-11-25 09:58:09.308529844 +0000 UTC m=+0.041951259 container exec eba3f60e070fbd8ec2d5c6afc313d2dcade77d802c695d4ece122bdfcd693d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:58:09 compute-1 podman[235283]: 2025-11-25 09:58:09.316922795 +0000 UTC m=+0.050344200 container exec_died eba3f60e070fbd8ec2d5c6afc313d2dcade77d802c695d4ece122bdfcd693d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 09:58:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:09 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc0041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:09.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:09 compute-1 podman[235336]: 2025-11-25 09:58:09.503768761 +0000 UTC m=+0.045018221 container exec 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:58:09 compute-1 podman[235354]: 2025-11-25 09:58:09.567568504 +0000 UTC m=+0.048617115 container exec_died 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:58:09 compute-1 podman[235336]: 2025-11-25 09:58:09.571838002 +0000 UTC m=+0.113087462 container exec_died 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 09:58:09 compute-1 podman[235389]: 2025-11-25 09:58:09.744990414 +0000 UTC m=+0.038945152 container exec 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, vcs-type=git, version=2.2.4, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph)
Nov 25 09:58:09 compute-1 podman[235389]: 2025-11-25 09:58:09.756736315 +0000 UTC m=+0.050691043 container exec_died 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Nov 25 09:58:09 compute-1 sudo[235032]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:09 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:09 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:09 compute-1 sudo[235416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:58:09 compute-1 sudo[235416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:09 compute-1 nova_compute[228683]: 2025-11-25 09:58:09.873 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:09 compute-1 sudo[235416]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:09 compute-1 sudo[235441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:58:09 compute-1 sudo[235441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:10 compute-1 sudo[235441]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:10 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:10 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:10 compute-1 ceph-mon[79643]: pgmap v808: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:58:10 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:58:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:10.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:11 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd8005bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:11.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:11 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:11 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc0041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:12 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:12 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:12 compute-1 ceph-mon[79643]: pgmap v809: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:58:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:13.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:13 compute-1 sudo[235497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:58:13 compute-1 sudo[235497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:13 compute-1 sudo[235497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:13 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:13 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd8005bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:14 compute-1 nova_compute[228683]: 2025-11-25 09:58:14.212 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:14 compute-1 ceph-mon[79643]: pgmap v810: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:14 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:14 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc0041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:14 compute-1 nova_compute[228683]: 2025-11-25 09:58:14.876 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:14.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:15 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:15.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:15 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:15 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:16 compute-1 ceph-mon[79643]: pgmap v811: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.709 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "53e5cf08-5d11-4733-b518-a4dc16d22e15" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.710 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.710 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.710 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.710 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.711 228687 INFO nova.compute.manager [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Terminating instance
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.712 228687 DEBUG nova.compute.manager [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:58:16 compute-1 kernel: tapb2edc7ae-85 (unregistering): left promiscuous mode
Nov 25 09:58:16 compute-1 NetworkManager[48856]: <info>  [1764064696.7410] device (tapb2edc7ae-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.751 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:16 compute-1 ovn_controller[133620]: 2025-11-25T09:58:16Z|00054|binding|INFO|Releasing lport b2edc7ae-85db-40d5-b391-dc394d1fabf2 from this chassis (sb_readonly=0)
Nov 25 09:58:16 compute-1 ovn_controller[133620]: 2025-11-25T09:58:16Z|00055|binding|INFO|Setting lport b2edc7ae-85db-40d5-b391-dc394d1fabf2 down in Southbound
Nov 25 09:58:16 compute-1 ovn_controller[133620]: 2025-11-25T09:58:16Z|00056|binding|INFO|Removing iface tapb2edc7ae-85 ovn-installed in OVS
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.752 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.754 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:73:02 10.100.0.21'], port_security=['fa:16:3e:32:73:02 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '53e5cf08-5d11-4733-b518-a4dc16d22e15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cf74985-519e-4c22-9e8e-5d45c028c6c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e3e0a9c-90d8-4bb2-a9a5-b8401547fa81, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], logical_port=b2edc7ae-85db-40d5-b391-dc394d1fabf2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.755 142940 INFO neutron.agent.ovn.metadata.agent [-] Port b2edc7ae-85db-40d5-b391-dc394d1fabf2 in datapath 23a0542a-b85d-40e7-8bd9-6ee0d43b0306 unbound from our chassis
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.757 142940 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23a0542a-b85d-40e7-8bd9-6ee0d43b0306, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.757 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[571330f9-e995-4447-8206-162da1bfa05a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.758 142940 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 namespace which is not needed anymore
Nov 25 09:58:16 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:16 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.770 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:16 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 25 09:58:16 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 12.064s CPU time.
Nov 25 09:58:16 compute-1 systemd-machined[192680]: Machine qemu-3-instance-00000007 terminated.
Nov 25 09:58:16 compute-1 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[234594]: [NOTICE]   (234599) : haproxy version is 2.8.14-c23fe91
Nov 25 09:58:16 compute-1 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[234594]: [NOTICE]   (234599) : path to executable is /usr/sbin/haproxy
Nov 25 09:58:16 compute-1 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[234594]: [WARNING]  (234599) : Exiting Master process...
Nov 25 09:58:16 compute-1 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[234594]: [ALERT]    (234599) : Current worker (234602) exited with code 143 (Terminated)
Nov 25 09:58:16 compute-1 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[234594]: [WARNING]  (234599) : All workers exited. Exiting... (0)
Nov 25 09:58:16 compute-1 systemd[1]: libpod-b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd.scope: Deactivated successfully.
Nov 25 09:58:16 compute-1 podman[235544]: 2025-11-25 09:58:16.878630659 +0000 UTC m=+0.042567089 container died b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 09:58:16 compute-1 podman[235544]: 2025-11-25 09:58:16.895666982 +0000 UTC m=+0.059603423 container cleanup b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:58:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-d13053e17ff7c74aed036a54b01e0baa350bcf23645d21f87a608ae3a91a7c2d-merged.mount: Deactivated successfully.
Nov 25 09:58:16 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd-userdata-shm.mount: Deactivated successfully.
Nov 25 09:58:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:16 compute-1 systemd[1]: libpod-conmon-b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd.scope: Deactivated successfully.
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.935 228687 INFO nova.virt.libvirt.driver [-] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Instance destroyed successfully.
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.936 228687 DEBUG nova.objects.instance [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid 53e5cf08-5d11-4733-b518-a4dc16d22e15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.949 228687 DEBUG nova.virt.libvirt.vif [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:56:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2023050108',display_name='tempest-TestNetworkBasicOps-server-2023050108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2023050108',id=7,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIXpjoEuOPqgV4AsTstNNUWTfa0aaNuXQdP5MqqmSo79o93Keg4jRRrK20IzTqU2dcwtjvSL9ynwgR0qrziME3a4BTQXjzpiMpsdxFMBiGdjPjC5fJVezQHyvIXN436nOA==',key_name='tempest-TestNetworkBasicOps-1729635165',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:57:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-pwovpbvg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:57:05Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=53e5cf08-5d11-4733-b518-a4dc16d22e15,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.949 228687 DEBUG nova.network.os_vif_util [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "address": "fa:16:3e:32:73:02", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2edc7ae-85", "ovs_interfaceid": "b2edc7ae-85db-40d5-b391-dc394d1fabf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.950 228687 DEBUG nova.network.os_vif_util [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:73:02,bridge_name='br-int',has_traffic_filtering=True,id=b2edc7ae-85db-40d5-b391-dc394d1fabf2,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2edc7ae-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.950 228687 DEBUG os_vif [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:73:02,bridge_name='br-int',has_traffic_filtering=True,id=b2edc7ae-85db-40d5-b391-dc394d1fabf2,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2edc7ae-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.951 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.951 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2edc7ae-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.953 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.954 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.956 228687 INFO os_vif [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:73:02,bridge_name='br-int',has_traffic_filtering=True,id=b2edc7ae-85db-40d5-b391-dc394d1fabf2,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2edc7ae-85')
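The DelPortCommand transaction logged at 09:58:16.951 is ovsdbapp's del_port call against the local switch database. A minimal sketch, where the socket path and timeout are assumptions rather than values taken from this log (os-vif reads its own configuration for these):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed local OVSDB endpoint
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # roughly equivalent to: ovs-vsctl --if-exists del-port br-int tapb2edc7ae-85
    api.del_port('tapb2edc7ae-85', bridge='br-int',
                 if_exists=True).execute(check_error=True)

if_exists=True makes the delete idempotent, which is why the unplug succeeds even though ovn-controller has already released the port on its side.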
Nov 25 09:58:16 compute-1 podman[235568]: 2025-11-25 09:58:16.957069356 +0000 UTC m=+0.038336035 container remove b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.962 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcd9ac7-b8da-48c1-a3a5-9a445bb79dbd]: (4, ('Tue Nov 25 09:58:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 (b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd)\nb953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd\nTue Nov 25 09:58:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 (b953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd)\nb953be461dbe78b5489c6531ee4ad76e04b0dc07d275526988838ad7d880bfdd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
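The privsep reply above captures the kill-script output verbatim: the haproxy container is stopped, then deleted. A minimal sketch of that stop-then-delete sequence with the podman CLI (the container name is copied from the log; the 10-second stop timeout is an assumption):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306'
    subprocess.run(['podman', 'stop', '--time', '10', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)

The exit code 143 logged by haproxy itself is consistent with this: podman stop delivers SIGTERM (128 + 15 = 143) before escalating to SIGKILL.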
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.963 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[a28bd105-05dd-4cca-a93a-0570c75b40eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.965 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23a0542a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:58:16 compute-1 kernel: tap23a0542a-b0: left promiscuous mode
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.973 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[6b56dafa-6778-4a08-b7e5-3bac7a2ce194]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.975 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:16 compute-1 nova_compute[228683]: 2025-11-25 09:58:16.985 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.987 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[0aac08ed-7556-4f61-8ab1-cc9ebfff79ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:16 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:16.988 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a5627c-741d-4010-8a7b-6d5314c5da28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:17 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:17.002 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[8253c78b-e5e1-4cbb-8c56-79a897eaf82b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 340941, 'reachable_time': 44265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235603, 'error': None, 'target': 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:17 compute-1 systemd[1]: run-netns-ovnmeta\x2d23a0542a\x2db85d\x2d40e7\x2d8bd9\x2d6ee0d43b0306.mount: Deactivated successfully.
Nov 25 09:58:17 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:17.006 143047 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:58:17 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:17.007 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[65fa2c65-7021-47c9-a750-520afb53fd72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
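remove_netns at ip_lib.py:607 runs inside the privsep daemon because deleting a network namespace requires root. A minimal equivalent with pyroute2, the library neutron's privileged ip_lib builds on (must be run as root; the namespace name is copied from the log):

    from pyroute2 import netns

    # same effect as: ip netns delete ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306
    netns.remove('ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306')

The systemd run-netns mount deactivation logged just above is the kernel-side counterpart: removing the namespace unmounts its bind-mounted handle under /run/netns.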
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.131 228687 INFO nova.virt.libvirt.driver [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Deleting instance files /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15_del
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.131 228687 INFO nova.virt.libvirt.driver [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Deletion of /var/lib/nova/instances/53e5cf08-5d11-4733-b518-a4dc16d22e15_del complete
Nov 25 09:58:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:17 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.396 228687 INFO nova.compute.manager [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Took 0.68 seconds to destroy the instance on the hypervisor.
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.397 228687 DEBUG oslo.service.loopingcall [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.397 228687 DEBUG nova.compute.manager [-] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.398 228687 DEBUG nova.network.neutron [-] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:58:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:17.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.649 228687 DEBUG nova.compute.manager [req-c6fbed81-a214-4bfc-83c0-520065e05ef4 req-19f8a2c0-3209-450c-855d-f878e61726d8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received event network-vif-unplugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.650 228687 DEBUG oslo_concurrency.lockutils [req-c6fbed81-a214-4bfc-83c0-520065e05ef4 req-19f8a2c0-3209-450c-855d-f878e61726d8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.650 228687 DEBUG oslo_concurrency.lockutils [req-c6fbed81-a214-4bfc-83c0-520065e05ef4 req-19f8a2c0-3209-450c-855d-f878e61726d8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.652 228687 DEBUG oslo_concurrency.lockutils [req-c6fbed81-a214-4bfc-83c0-520065e05ef4 req-19f8a2c0-3209-450c-855d-f878e61726d8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.652 228687 DEBUG nova.compute.manager [req-c6fbed81-a214-4bfc-83c0-520065e05ef4 req-19f8a2c0-3209-450c-855d-f878e61726d8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] No waiting events found dispatching network-vif-unplugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.652 228687 DEBUG nova.compute.manager [req-c6fbed81-a214-4bfc-83c0-520065e05ef4 req-19f8a2c0-3209-450c-855d-f878e61726d8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received event network-vif-unplugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
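The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils guarding the per-instance event list. A minimal sketch of the same pattern (the lock name is copied from the log; the body is an illustrative placeholder):

    from oslo_concurrency import lockutils

    with lockutils.lock('53e5cf08-5d11-4733-b518-a4dc16d22e15-events'):
        # pop/dispatch any waiters registered for
        # network-vif-unplugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2
        pass

Because nothing was waiting on the event (the instance is already in task_state deleting), the manager logs it and moves on rather than waking a waiter.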
Nov 25 09:58:17 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:17 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.867 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:17 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:17.868 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:58:17 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:17.870 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.912 228687 DEBUG nova.network.neutron [-] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.927 228687 INFO nova.compute.manager [-] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Took 0.53 seconds to deallocate network for instance.
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.962 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:17 compute-1 nova_compute[228683]: 2025-11-25 09:58:17.963 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:18 compute-1 nova_compute[228683]: 2025-11-25 09:58:18.004 228687 DEBUG oslo_concurrency.processutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:58:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:58:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/346978042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:18 compute-1 nova_compute[228683]: 2025-11-25 09:58:18.344 228687 DEBUG oslo_concurrency.processutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
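The resource tracker shells out exactly as logged to size the RBD-backed storage after the instance is gone. A minimal sketch using oslo.concurrency's processutils and stdlib json (command and paths copied from the log; the JSON field name is as emitted by recent Ceph releases and should be treated as an assumption):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    total_avail = stats['stats']['total_avail_bytes']  # cluster-wide free bytes

The ceph-mon audit lines interleaved here (handle_command / dispatch for {"prefix": "df", "format": "json"}) are the monitor's view of these same client calls.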
Nov 25 09:58:18 compute-1 nova_compute[228683]: 2025-11-25 09:58:18.348 228687 DEBUG nova.compute.provider_tree [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:58:18 compute-1 nova_compute[228683]: 2025-11-25 09:58:18.360 228687 DEBUG nova.scheduler.client.report [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:58:18 compute-1 nova_compute[228683]: 2025-11-25 09:58:18.377 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:18 compute-1 nova_compute[228683]: 2025-11-25 09:58:18.482 228687 INFO nova.scheduler.client.report [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance 53e5cf08-5d11-4733-b518-a4dc16d22e15
Nov 25 09:58:18 compute-1 nova_compute[228683]: 2025-11-25 09:58:18.525 228687 DEBUG oslo_concurrency.lockutils [None req-d7659fff-d8ac-485f-ace9-f3e81c0000a4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:18 compute-1 ceph-mon[79643]: pgmap v812: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:58:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/346978042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:18 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:18 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd80068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:18.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:19 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:19.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:19 compute-1 nova_compute[228683]: 2025-11-25 09:58:19.716 228687 DEBUG nova.compute.manager [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received event network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:19 compute-1 nova_compute[228683]: 2025-11-25 09:58:19.716 228687 DEBUG oslo_concurrency.lockutils [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:19 compute-1 nova_compute[228683]: 2025-11-25 09:58:19.717 228687 DEBUG oslo_concurrency.lockutils [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:19 compute-1 nova_compute[228683]: 2025-11-25 09:58:19.717 228687 DEBUG oslo_concurrency.lockutils [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "53e5cf08-5d11-4733-b518-a4dc16d22e15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:19 compute-1 nova_compute[228683]: 2025-11-25 09:58:19.717 228687 DEBUG nova.compute.manager [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] No waiting events found dispatching network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:58:19 compute-1 nova_compute[228683]: 2025-11-25 09:58:19.717 228687 WARNING nova.compute.manager [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received unexpected event network-vif-plugged-b2edc7ae-85db-40d5-b391-dc394d1fabf2 for instance with vm_state deleted and task_state None.
Nov 25 09:58:19 compute-1 nova_compute[228683]: 2025-11-25 09:58:19.717 228687 DEBUG nova.compute.manager [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Received event network-vif-deleted-b2edc7ae-85db-40d5-b391-dc394d1fabf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:19 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:19 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc0052c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:19 compute-1 nova_compute[228683]: 2025-11-25 09:58:19.877 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:20 compute-1 ceph-mon[79643]: pgmap v813: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 2.3 KiB/s wr, 0 op/s
Nov 25 09:58:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1899497758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/323938496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:20 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:20 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:20 compute-1 nova_compute[228683]: 2025-11-25 09:58:20.890 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:21 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd80068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:58:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:21.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:58:21 compute-1 nova_compute[228683]: 2025-11-25 09:58:21.417 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:21 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:21 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:21 compute-1 nova_compute[228683]: 2025-11-25 09:58:21.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:21 compute-1 nova_compute[228683]: 2025-11-25 09:58:21.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:58:21 compute-1 nova_compute[228683]: 2025-11-25 09:58:21.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:58:21 compute-1 nova_compute[228683]: 2025-11-25 09:58:21.903 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:58:21 compute-1 nova_compute[228683]: 2025-11-25 09:58:21.903 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:21 compute-1 nova_compute[228683]: 2025-11-25 09:58:21.903 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
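_reclaim_queued_deletes, like the other tasks in this burst, is declared through oslo.service's periodic_task machinery. A minimal sketch of how such a task is wired up (class name and spacing are illustrative, not nova's actual values):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # nova's version returns immediately when
            # CONF.reclaim_instance_interval <= 0, as logged above
            pass

With reclaim_instance_interval at its default of 0, deletes are immediate and this task is a no-op; setting it above 0 turns deletes into soft-deletes that this task later reclaims.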
Nov 25 09:58:21 compute-1 nova_compute[228683]: 2025-11-25 09:58:21.953 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:22 compute-1 sudo[235635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:58:22 compute-1 sudo[235635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:22 compute-1 sudo[235635]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:22 compute-1 ceph-mon[79643]: pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 09:58:22 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:22 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc0052c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:22 compute-1 nova_compute[228683]: 2025-11-25 09:58:22.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:22 compute-1 nova_compute[228683]: 2025-11-25 09:58:22.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:22 compute-1 nova_compute[228683]: 2025-11-25 09:58:22.911 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:22 compute-1 nova_compute[228683]: 2025-11-25 09:58:22.911 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:22 compute-1 nova_compute[228683]: 2025-11-25 09:58:22.911 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:22 compute-1 nova_compute[228683]: 2025-11-25 09:58:22.911 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:58:22 compute-1 nova_compute[228683]: 2025-11-25 09:58:22.911 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:58:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:22.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:23 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:58:23 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2920216429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.250 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:58:23 compute-1 podman[235684]: 2025-11-25 09:58:23.324970259 +0000 UTC m=+0.047566685 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 09:58:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:23 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:23.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.469 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.470 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4835MB free_disk=59.942447662353516GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.471 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.471 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.590 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.591 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.602 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing inventories for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:58:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2920216429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.663 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating ProviderTree inventory for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.664 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating inventory in ProviderTree for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.687 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing aggregate associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.703 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing trait associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX512VAES,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:58:23 compute-1 nova_compute[228683]: 2025-11-25 09:58:23.737 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:58:23 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:23 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd80068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:24 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:58:24 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3779375704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:24 compute-1 nova_compute[228683]: 2025-11-25 09:58:24.075 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:58:24 compute-1 nova_compute[228683]: 2025-11-25 09:58:24.079 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:58:24 compute-1 nova_compute[228683]: 2025-11-25 09:58:24.214 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:58:24 compute-1 nova_compute[228683]: 2025-11-25 09:58:24.238 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:58:24 compute-1 nova_compute[228683]: 2025-11-25 09:58:24.238 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:24 compute-1 ceph-mon[79643]: pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 09:58:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3779375704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:24 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:24 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:24 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:58:24.871 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:58:24 compute-1 nova_compute[228683]: 2025-11-25 09:58:24.879 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:24.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:25 compute-1 nova_compute[228683]: 2025-11-25 09:58:25.238 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc0052c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:25.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1288411704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3997797534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4173911749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:25 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:25 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:25 compute-1 nova_compute[228683]: 2025-11-25 09:58:25.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-1 ceph-mon[79643]: pgmap v816: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 09:58:26 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:26 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd80068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:26 compute-1 nova_compute[228683]: 2025-11-25 09:58:26.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-1 nova_compute[228683]: 2025-11-25 09:58:26.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-1 nova_compute[228683]: 2025-11-25 09:58:26.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-1 nova_compute[228683]: 2025-11-25 09:58:26.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:58:26 compute-1 nova_compute[228683]: 2025-11-25 09:58:26.909 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:58:26 compute-1 nova_compute[228683]: 2025-11-25 09:58:26.909 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:26.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:26 compute-1 nova_compute[228683]: 2025-11-25 09:58:26.956 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:27 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc400d7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:27 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:27 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdddc0052c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:27 compute-1 nova_compute[228683]: 2025-11-25 09:58:27.900 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:27 compute-1 nova_compute[228683]: 2025-11-25 09:58:27.915 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:27 compute-1 nova_compute[228683]: 2025-11-25 09:58:27.915 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:58:28 compute-1 ceph-mon[79643]: pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Nov 25 09:58:28 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:28 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb4003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:28.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:29 compute-1 kernel: ganesha.nfsd[234936]: segfault at 50 ip 00007fde7274532e sp 00007fde41ffa210 error 4 in libntirpc.so.5.8[7fde7272a000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 25 09:58:29 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:58:29 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe[234685]: 25/11/2025 09:58:29 : epoch 69257d79 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddd80068d0 fd 38 proxy ignored for local
Nov 25 09:58:29 compute-1 systemd[1]: Started Process Core Dump (PID 235727/UID 0).
Nov 25 09:58:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:29.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:29 compute-1 nova_compute[228683]: 2025-11-25 09:58:29.880 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:30 compute-1 systemd-coredump[235728]: Process 234690 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 56:
                                                    #0  0x00007fde7274532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:58:30 compute-1 systemd[1]: systemd-coredump@10-235727-0.service: Deactivated successfully.
Nov 25 09:58:30 compute-1 podman[235733]: 2025-11-25 09:58:30.445963313 +0000 UTC m=+0.018785663 container died eba3f60e070fbd8ec2d5c6afc313d2dcade77d802c695d4ece122bdfcd693d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:58:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-6b4e819dc0d7c46ac4135fa3fee3dd3eb71f977de5c126921fe95a99989d47e3-merged.mount: Deactivated successfully.
Nov 25 09:58:30 compute-1 podman[235733]: 2025-11-25 09:58:30.464605484 +0000 UTC m=+0.037427834 container remove eba3f60e070fbd8ec2d5c6afc313d2dcade77d802c695d4ece122bdfcd693d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-0-0-compute-1-yfzsxe, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 09:58:30 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:58:30 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:58:30 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.025s CPU time.
Nov 25 09:58:30 compute-1 ceph-mon[79643]: pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.0 KiB/s wr, 56 op/s
Nov 25 09:58:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:30.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:31 compute-1 nova_compute[228683]: 2025-11-25 09:58:31.935 228687 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064696.9333432, 53e5cf08-5d11-4733-b518-a4dc16d22e15 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:58:31 compute-1 nova_compute[228683]: 2025-11-25 09:58:31.935 228687 INFO nova.compute.manager [-] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] VM Stopped (Lifecycle Event)
Nov 25 09:58:31 compute-1 nova_compute[228683]: 2025-11-25 09:58:31.958 228687 DEBUG nova.compute.manager [None req-cf262108-5224-41b6-89bb-a010880d01ed - - - - - -] [instance: 53e5cf08-5d11-4733-b518-a4dc16d22e15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:58:31 compute-1 nova_compute[228683]: 2025-11-25 09:58:31.959 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:32 compute-1 ceph-mon[79643]: pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.0 KiB/s wr, 56 op/s
Nov 25 09:58:32 compute-1 podman[235767]: 2025-11-25 09:58:32.805850361 +0000 UTC m=+0.059296486 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 09:58:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:33.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:34 compute-1 ceph-mon[79643]: pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:58:34 compute-1 nova_compute[228683]: 2025-11-25 09:58:34.882 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:34.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:35 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095835 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:58:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:36 compute-1 ceph-mon[79643]: pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:58:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:36.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:36 compute-1 nova_compute[228683]: 2025-11-25 09:58:36.962 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:36 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095836 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:58:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:37.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:37 compute-1 podman[235793]: 2025-11-25 09:58:37.789902912 +0000 UTC m=+0.044293375 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:58:38 compute-1 ceph-mon[79643]: pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 25 09:58:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:39.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:39 compute-1 nova_compute[228683]: 2025-11-25 09:58:39.884 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:40 compute-1 ceph-mon[79643]: pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:40 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Scheduled restart job, restart counter is at 11.
Nov 25 09:58:40 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:58:40 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Consumed 1.025s CPU time.
Nov 25 09:58:40 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Start request repeated too quickly.
Nov 25 09:58:40 compute-1 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.0.0.compute-1.yfzsxe.service: Failed with result 'exit-code'.
Nov 25 09:58:40 compute-1 systemd[1]: Failed to start Ceph nfs.cephfs.0.0.compute-1.yfzsxe for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:58:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:40.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:41.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.686122) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721686146, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 749, "num_deletes": 251, "total_data_size": 1539127, "memory_usage": 1564376, "flush_reason": "Manual Compaction"}
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721689753, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1017328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25379, "largest_seqno": 26123, "table_properties": {"data_size": 1013658, "index_size": 1514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8548, "raw_average_key_size": 19, "raw_value_size": 1006258, "raw_average_value_size": 2323, "num_data_blocks": 66, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064673, "oldest_key_time": 1764064673, "file_creation_time": 1764064721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3657 microseconds, and 2414 cpu microseconds.
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689780) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1017328 bytes OK
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689790) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690133) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690142) EVENT_LOG_v1 {"time_micros": 1764064721690139, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690151) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1535109, prev total WAL file size 1535109, number of live WAL files 2.
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690551) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(993KB)], [48(12MB)]
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721690582, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14206812, "oldest_snapshot_seqno": -1}
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5401 keys, 12081670 bytes, temperature: kUnknown
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721715583, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12081670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12046641, "index_size": 20455, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138529, "raw_average_key_size": 25, "raw_value_size": 11950010, "raw_average_value_size": 2212, "num_data_blocks": 829, "num_entries": 5401, "num_filter_entries": 5401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.715742) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12081670 bytes
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.716233) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 567.3 rd, 482.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.6 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(25.8) write-amplify(11.9) OK, records in: 5919, records dropped: 518 output_compression: NoCompression
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.716250) EVENT_LOG_v1 {"time_micros": 1764064721716243, "job": 28, "event": "compaction_finished", "compaction_time_micros": 25043, "compaction_time_cpu_micros": 18694, "output_level": 6, "num_output_files": 1, "total_output_size": 12081670, "num_input_records": 5919, "num_output_records": 5401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721716496, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721718017, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-1 nova_compute[228683]: 2025-11-25 09:58:41.966 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:42 compute-1 sudo[235812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:58:42 compute-1 sudo[235812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:42 compute-1 sudo[235812]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:42 compute-1 ceph-mon[79643]: pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:42 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/110315522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:58:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:43.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:58:44 compute-1 ceph-mon[79643]: pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:44 compute-1 nova_compute[228683]: 2025-11-25 09:58:44.886 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:44.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:45.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:46 compute-1 ceph-mon[79643]: pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:46 compute-1 nova_compute[228683]: 2025-11-25 09:58:46.969 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:47.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:48 compute-1 ceph-mon[79643]: pgmap v827: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:58:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:48.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:49 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4089513043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:58:49 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/998264127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:58:49 compute-1 nova_compute[228683]: 2025-11-25 09:58:49.888 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:50 compute-1 ceph-mon[79643]: pgmap v828: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:58:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:50.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:51.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:51 compute-1 nova_compute[228683]: 2025-11-25 09:58:51.973 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:52 compute-1 ceph-mon[79643]: pgmap v829: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:52.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:53.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:53 compute-1 podman[235843]: 2025-11-25 09:58:53.791996942 +0000 UTC m=+0.040525522 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:58:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 09:58:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/719374702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:58:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 09:58:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/719374702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:58:54 compute-1 ceph-mon[79643]: pgmap v830: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/719374702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:58:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/719374702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:58:54 compute-1 nova_compute[228683]: 2025-11-25 09:58:54.889 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:54.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:55.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:56 compute-1 ceph-mon[79643]: pgmap v831: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:58:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:56.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:58:56 compute-1 nova_compute[228683]: 2025-11-25 09:58:56.976 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:57.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:57 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1528136379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:58 compute-1 ceph-mon[79643]: pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 09:58:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:58.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:58:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:59.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:59 compute-1 nova_compute[228683]: 2025-11-25 09:58:59.891 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:00 compute-1 ceph-mon[79643]: pgmap v833: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:59:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:00.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:01.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:01 compute-1 nova_compute[228683]: 2025-11-25 09:59:01.979 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:02 compute-1 sudo[235865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:59:02 compute-1 sudo[235865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:02 compute-1 sudo[235865]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:02 compute-1 ceph-mon[79643]: pgmap v834: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:59:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/236858694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:02.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:03.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:03 compute-1 podman[235891]: 2025-11-25 09:59:03.795904415 +0000 UTC m=+0.050677568 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:59:03 compute-1 ovn_controller[133620]: 2025-11-25T09:59:03Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 09:59:04 compute-1 ceph-mon[79643]: pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 80 op/s
Nov 25 09:59:04 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 09:59:04 compute-1 nova_compute[228683]: 2025-11-25 09:59:04.893 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:04.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:59:05.003 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:59:05.003 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:59:05.003 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:05 compute-1 nova_compute[228683]: 2025-11-25 09:59:05.102 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:05.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:06 compute-1 ceph-mon[79643]: pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 80 op/s
Nov 25 09:59:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3197358068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:06.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:06 compute-1 nova_compute[228683]: 2025-11-25 09:59:06.982 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:07.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1630008766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [WARNING] 328/095908 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:59:08 compute-1 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq[87573]: [ALERT] 328/095908 (4) : backend 'backend' has no server available!
Nov 25 09:59:08 compute-1 podman[235917]: 2025-11-25 09:59:08.782156353 +0000 UTC m=+0.036628306 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:59:08 compute-1 ceph-mon[79643]: pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 09:59:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:08.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:09.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:09 compute-1 nova_compute[228683]: 2025-11-25 09:59:09.896 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:10 compute-1 ceph-mon[79643]: pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:10.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:11.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:11 compute-1 nova_compute[228683]: 2025-11-25 09:59:11.985 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:12 compute-1 ceph-mon[79643]: pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:12.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:13.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:13 compute-1 sudo[235937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:59:13 compute-1 sudo[235937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:13 compute-1 sudo[235937]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:13 compute-1 sudo[235962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:59:13 compute-1 sudo[235962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:14 compute-1 sudo[235962]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:14 compute-1 ceph-mon[79643]: pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:59:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:59:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:59:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:59:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:59:14 compute-1 nova_compute[228683]: 2025-11-25 09:59:14.898 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:15.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:16 compute-1 ceph-mon[79643]: pgmap v841: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3465213788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:16 compute-1 nova_compute[228683]: 2025-11-25 09:59:16.989 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:59:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:16.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:59:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:17 compute-1 sudo[236018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:59:17 compute-1 sudo[236018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:17 compute-1 sudo[236018]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:18 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:59:18.374 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:59:18 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:59:18.375 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:59:18 compute-1 nova_compute[228683]: 2025-11-25 09:59:18.375 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:18 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:18 compute-1 ceph-mon[79643]: pgmap v842: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 09:59:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:18.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:19.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:19 compute-1 nova_compute[228683]: 2025-11-25 09:59:19.900 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:20 compute-1 ceph-mon[79643]: pgmap v843: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:59:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:20.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:21.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/801279752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/253252585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:21 compute-1 nova_compute[228683]: 2025-11-25 09:59:21.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:21 compute-1 nova_compute[228683]: 2025-11-25 09:59:21.893 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:59:21 compute-1 nova_compute[228683]: 2025-11-25 09:59:21.992 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:22 compute-1 ceph-mon[79643]: pgmap v844: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 25 09:59:22 compute-1 sudo[236046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:59:22 compute-1 sudo[236046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:22 compute-1 sudo[236046]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:22 compute-1 nova_compute[228683]: 2025-11-25 09:59:22.890 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:23.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:23 compute-1 nova_compute[228683]: 2025-11-25 09:59:23.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:23 compute-1 nova_compute[228683]: 2025-11-25 09:59:23.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:59:23 compute-1 nova_compute[228683]: 2025-11-25 09:59:23.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:59:23 compute-1 nova_compute[228683]: 2025-11-25 09:59:23.913 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:59:24 compute-1 ovn_metadata_agent[142935]: 2025-11-25 09:59:24.377 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:24 compute-1 ceph-mon[79643]: pgmap v845: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 708 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:24 compute-1 podman[236072]: 2025-11-25 09:59:24.787251733 +0000 UTC m=+0.040502450 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 09:59:24 compute-1 nova_compute[228683]: 2025-11-25 09:59:24.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:24 compute-1 nova_compute[228683]: 2025-11-25 09:59:24.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:24 compute-1 nova_compute[228683]: 2025-11-25 09:59:24.902 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:24 compute-1 nova_compute[228683]: 2025-11-25 09:59:24.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:24 compute-1 nova_compute[228683]: 2025-11-25 09:59:24.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:24 compute-1 nova_compute[228683]: 2025-11-25 09:59:24.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:24 compute-1 nova_compute[228683]: 2025-11-25 09:59:24.910 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:59:24 compute-1 nova_compute[228683]: 2025-11-25 09:59:24.911 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:25.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:25 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:59:25 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3985065017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.244 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.420 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.421 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4961MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.422 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.422 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.474 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.474 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.488 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3280660865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3985065017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/792236702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:59:25 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/235832975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.825 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.828 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.847 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.848 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:59:25 compute-1 nova_compute[228683]: 2025-11-25 09:59:25.849 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:26 compute-1 ceph-mon[79643]: pgmap v846: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 708 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:26 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/235832975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:26 compute-1 nova_compute[228683]: 2025-11-25 09:59:26.849 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:26 compute-1 nova_compute[228683]: 2025-11-25 09:59:26.849 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:26 compute-1 nova_compute[228683]: 2025-11-25 09:59:26.995 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:27.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:28 compute-1 ceph-mon[79643]: pgmap v847: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 709 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:28 compute-1 nova_compute[228683]: 2025-11-25 09:59:28.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:28 compute-1 nova_compute[228683]: 2025-11-25 09:59:28.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:29.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:29.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:29 compute-1 nova_compute[228683]: 2025-11-25 09:59:29.904 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:30 compute-1 ceph-mon[79643]: pgmap v848: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 0 op/s
Nov 25 09:59:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:31.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:31.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:31 compute-1 nova_compute[228683]: 2025-11-25 09:59:31.998 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:32 compute-1 ceph-mon[79643]: pgmap v849: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:59:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:33.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:33 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2430379519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:34 compute-1 ceph-mon[79643]: pgmap v850: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:59:34 compute-1 podman[236137]: 2025-11-25 09:59:34.802015298 +0000 UTC m=+0.054738637 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 09:59:34 compute-1 nova_compute[228683]: 2025-11-25 09:59:34.905 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:35.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:36 compute-1 ceph-mon[79643]: pgmap v851: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:59:37 compute-1 nova_compute[228683]: 2025-11-25 09:59:37.001 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:37.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:37.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:37 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1593650733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:37 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/845505631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:38 compute-1 ceph-mon[79643]: pgmap v852: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:39.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:39.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:39 compute-1 podman[236163]: 2025-11-25 09:59:39.786023585 +0000 UTC m=+0.039708635 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:59:39 compute-1 nova_compute[228683]: 2025-11-25 09:59:39.907 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:40 compute-1 ceph-mon[79643]: pgmap v853: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:41.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:41.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:42 compute-1 nova_compute[228683]: 2025-11-25 09:59:42.004 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:42 compute-1 ceph-mon[79643]: pgmap v854: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:42 compute-1 sudo[236181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:59:42 compute-1 sudo[236181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:42 compute-1 sudo[236181]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:43.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:44 compute-1 ceph-mon[79643]: pgmap v855: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:44 compute-1 nova_compute[228683]: 2025-11-25 09:59:44.909 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:45.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:45.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:46 compute-1 ceph-mon[79643]: pgmap v856: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:47 compute-1 nova_compute[228683]: 2025-11-25 09:59:47.007 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:47.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:47.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:48 compute-1 ceph-mon[79643]: pgmap v857: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:49.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:49.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:49 compute-1 nova_compute[228683]: 2025-11-25 09:59:49.911 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:50 compute-1 ceph-mon[79643]: pgmap v858: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 09:59:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:51.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:51.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:52 compute-1 nova_compute[228683]: 2025-11-25 09:59:52.009 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:52 compute-1 ceph-mon[79643]: pgmap v859: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 09:59:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:53.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:53.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:54 compute-1 ceph-mon[79643]: pgmap v860: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1439725811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:59:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1439725811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:59:54 compute-1 nova_compute[228683]: 2025-11-25 09:59:54.914 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:55.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:55.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:55 compute-1 podman[236213]: 2025-11-25 09:59:55.781880489 +0000 UTC m=+0.036776958 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:59:56 compute-1 ceph-mon[79643]: pgmap v861: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:57 compute-1 nova_compute[228683]: 2025-11-25 09:59:57.012 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:57.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:57.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:58 compute-1 ceph-mon[79643]: pgmap v862: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:59.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 09:59:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:59.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:59 compute-1 nova_compute[228683]: 2025-11-25 09:59:59.916 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:00 compute-1 ceph-mon[79643]: pgmap v863: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 10:00:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:00 compute-1 ceph-mon[79643]: overall HEALTH_OK
Nov 25 10:00:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:01.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:01.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:02 compute-1 nova_compute[228683]: 2025-11-25 10:00:02.014 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:02 compute-1 sudo[236232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:00:02 compute-1 sudo[236232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:02 compute-1 sudo[236232]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:02 compute-1 ceph-mon[79643]: pgmap v864: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 10:00:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2758883569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:03.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:00:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:03.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:00:04 compute-1 ceph-mon[79643]: pgmap v865: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 25 10:00:04 compute-1 nova_compute[228683]: 2025-11-25 10:00:04.916 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:05.003 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:05.004 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:05.004 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:05.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:05.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:05 compute-1 podman[236259]: 2025-11-25 10:00:05.800915273 +0000 UTC m=+0.054994398 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:00:06 compute-1 ceph-mon[79643]: pgmap v866: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 25 10:00:07 compute-1 nova_compute[228683]: 2025-11-25 10:00:07.016 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:07.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:08 compute-1 ceph-mon[79643]: pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 29 op/s
Nov 25 10:00:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:09.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:09 compute-1 nova_compute[228683]: 2025-11-25 10:00:09.917 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:10 compute-1 podman[236284]: 2025-11-25 10:00:10.787131712 +0000 UTC m=+0.041926114 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 10:00:10 compute-1 ceph-mon[79643]: pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 25 10:00:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:11.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:11.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:12 compute-1 nova_compute[228683]: 2025-11-25 10:00:12.019 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:12 compute-1 ceph-mon[79643]: pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 25 10:00:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:13.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:14 compute-1 ceph-mon[79643]: pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:14 compute-1 nova_compute[228683]: 2025-11-25 10:00:14.918 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:15.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:15 compute-1 ceph-osd[77354]: bluestore.MempoolThread fragmentation_score=0.000153 took=0.000046s
Nov 25 10:00:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:16 compute-1 ceph-mon[79643]: pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:17 compute-1 nova_compute[228683]: 2025-11-25 10:00:17.022 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:17.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:17.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:17 compute-1 sudo[236305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:00:17 compute-1 sudo[236305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:17 compute-1 sudo[236305]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:17 compute-1 sudo[236330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:00:17 compute-1 sudo[236330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:18 compute-1 sudo[236330]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:18 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:18.462 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:00:18 compute-1 nova_compute[228683]: 2025-11-25 10:00:18.462 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:18 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:18.463 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:00:18 compute-1 ceph-mon[79643]: pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:19.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:19.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:19 compute-1 nova_compute[228683]: 2025-11-25 10:00:19.920 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:20 compute-1 ceph-mon[79643]: pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:00:20 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:00:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:21.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:21.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:21 compute-1 ceph-mon[79643]: pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4117408647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:21 compute-1 ceph-mon[79643]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 25 10:00:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2509102504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:21 compute-1 nova_compute[228683]: 2025-11-25 10:00:21.679 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:21 compute-1 nova_compute[228683]: 2025-11-25 10:00:21.679 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:21 compute-1 nova_compute[228683]: 2025-11-25 10:00:21.690 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 10:00:21 compute-1 nova_compute[228683]: 2025-11-25 10:00:21.736 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:21 compute-1 nova_compute[228683]: 2025-11-25 10:00:21.737 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:21 compute-1 nova_compute[228683]: 2025-11-25 10:00:21.741 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 10:00:21 compute-1 nova_compute[228683]: 2025-11-25 10:00:21.741 228687 INFO nova.compute.claims [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Claim successful on node compute-1.ctlplane.example.com
Nov 25 10:00:21 compute-1 nova_compute[228683]: 2025-11-25 10:00:21.823 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.025 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:00:22 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/533156285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.165 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.169 228687 DEBUG nova.compute.provider_tree [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.182 228687 DEBUG nova.scheduler.client.report [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.194 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.194 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.229 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.229 228687 DEBUG nova.network.neutron [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 10:00:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.246 228687 INFO nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.262 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.322 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.323 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.323 228687 INFO nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Creating image(s)
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.339 228687 DEBUG nova.storage.rbd_utils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 60f54767-63c6-411b-9e17-ab15032acf8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.353 228687 DEBUG nova.storage.rbd_utils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 60f54767-63c6-411b-9e17-ab15032acf8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.368 228687 DEBUG nova.storage.rbd_utils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 60f54767-63c6-411b-9e17-ab15032acf8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.370 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.414 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
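
The pair of processutils lines above is nova probing the cached base image with qemu-img under the oslo.concurrency prlimit wrapper, which caps the child's address space (--as=1073741824) and CPU seconds (--cpu=30) so a malformed image cannot hang or balloon the compute service. A minimal sketch of the same call, reusing the exact command and path from the log:

```python
# Sketch of the bounded probe above. oslo.concurrency rewrites this into the
# `python3 -m oslo_concurrency.prlimit --as=... --cpu=...` wrapper seen in the log.
import json

from oslo_concurrency import processutils

out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9',
    '--force-share', '--output=json',
    prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30))
info = json.loads(out)
print(info['format'], info['virtual-size'])  # e.g. qcow2 and size in bytes
```
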
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.415 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.415 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.416 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
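
The acquire/release pair above serializes base-image fetches per image hash: concurrent boots of the same image contend on this lock, and whoever arrives second finds the file already cached (here the lock was held 0.000s because the base image already existed). A sketch of the same oslo.concurrency pattern; the lock_path and the fetch_image() body are assumptions for illustration, not nova's literal code:

```python
from oslo_concurrency import lockutils

# Lock name is the base-image hash from the log; lock_path is an assumed
# location for nova's external image-cache locks.
@lockutils.synchronized('eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9',
                        external=True,
                        lock_path='/var/lib/nova/instances/locks')
def fetch_func_sync():
    fetch_image()  # hypothetical: download the image only if not yet cached

def fetch_image():
    """Placeholder for the real Glance download/convert step."""
```
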
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.431 228687 DEBUG nova.storage.rbd_utils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 60f54767-63c6-411b-9e17-ab15032acf8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.433 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 60f54767-63c6-411b-9e17-ab15032acf8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.562 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 60f54767-63c6-411b-9e17-ab15032acf8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.607 228687 DEBUG nova.storage.rbd_utils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image 60f54767-63c6-411b-9e17-ab15032acf8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
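
With the Rbd image backend the cached base file is imported into the `vms` pool and then grown to the flavor's 1 GiB root disk (the 1073741824 in the resize line). The import command below is verbatim from the log; the resize is an assumed CLI equivalent of what nova actually performs through librbd in rbd_utils.resize:

```python
from oslo_concurrency import processutils

base = '/var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9'
image = '60f54767-63c6-411b-9e17-ab15032acf8f_disk'

# Import the flat base file as a format-2 RBD image (verbatim from the log).
processutils.execute('rbd', 'import', '--pool', 'vms', base, image,
                     '--image-format=2', '--id', 'openstack',
                     '--conf', '/etc/ceph/ceph.conf')
# Grow it to the flavor's root_gb (1 GiB); nova does this via librbd instead.
processutils.execute('rbd', 'resize', 'vms/' + image, '--size', '1G',
                     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
```
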
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.659 228687 DEBUG nova.objects.instance [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid 60f54767-63c6-411b-9e17-ab15032acf8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.669 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.669 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Ensure instance console log exists: /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.669 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.669 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.670 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/533156285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:22 compute-1 sudo[236574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:00:22 compute-1 sudo[236574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:22 compute-1 sudo[236574]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:22 compute-1 nova_compute[228683]: 2025-11-25 10:00:22.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
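
The skip above is expected: `reclaim_instance_interval` defaults to 0, so deferred delete is off. Setting it to a positive number of seconds in the `[DEFAULT]` section of nova.conf makes deleted instances linger in SOFT_DELETED state until this periodic task reclaims them.
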
Nov 25 10:00:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:23.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:23 compute-1 sudo[236600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:00:23 compute-1 sudo[236600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:23 compute-1 sudo[236600]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:23 compute-1 nova_compute[228683]: 2025-11-25 10:00:23.374 228687 DEBUG nova.policy [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 10:00:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:23.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:23 compute-1 ceph-mon[79643]: pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:23 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:24 compute-1 nova_compute[228683]: 2025-11-25 10:00:24.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:24 compute-1 nova_compute[228683]: 2025-11-25 10:00:24.909 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:24 compute-1 nova_compute[228683]: 2025-11-25 10:00:24.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:24 compute-1 nova_compute[228683]: 2025-11-25 10:00:24.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:24 compute-1 nova_compute[228683]: 2025-11-25 10:00:24.910 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:00:24 compute-1 nova_compute[228683]: 2025-11-25 10:00:24.910 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:24 compute-1 nova_compute[228683]: 2025-11-25 10:00:24.922 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:25.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:25 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:00:25 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3339167886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.250 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
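
For an RBD-backed node the disk side of the resource audit comes from the cluster rather than the local filesystem, which is why the periodic task shells out to `ceph df` here. A sketch of deriving the free_disk figure reported below from that JSON; field names follow `ceph df --format=json` output:

```python
import json

from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)['stats']
free_gb = stats['total_avail_bytes'] / float(1 << 30)
print('free_disk=%.12fGB' % free_gb)  # ~59.98GB, matching the view below
```
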
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.433 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.434 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4939MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.434 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.435 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.474 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Instance 60f54767-63c6-411b-9e17-ab15032acf8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.475 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.475 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.501 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:25.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:25 compute-1 ceph-mon[79643]: pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3339167886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:25 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:00:25 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/121840190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.839 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.843 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.858 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
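
The inventory above is what Placement uses for admission control: a new allocation fits only if used + requested <= (total - reserved) * allocation_ratio for every resource class. Worked out for this host's numbers:

```python
# Worked example: schedulable capacity implied by the inventory in the log,
# using capacity = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU': {'total': 4, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)  # VCPU 16.0, MEMORY_MB 7169.0, DISK_GB 52.2
```
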
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.875 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:00:25 compute-1 nova_compute[228683]: 2025-11-25 10:00:25.875 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:26 compute-1 nova_compute[228683]: 2025-11-25 10:00:26.434 228687 DEBUG nova.network.neutron [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Successfully created port: 521cf1b3-0c01-4af0-8577-970d4c4bf811 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 10:00:26 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:26.464 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:26 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4105020736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:26 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/121840190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:26 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/455567082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:26 compute-1 podman[236670]: 2025-11-25 10:00:26.78908502 +0000 UTC m=+0.041495964 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 10:00:26 compute-1 nova_compute[228683]: 2025-11-25 10:00:26.875 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:26 compute-1 nova_compute[228683]: 2025-11-25 10:00:26.875 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:00:26 compute-1 nova_compute[228683]: 2025-11-25 10:00:26.876 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:00:26 compute-1 nova_compute[228683]: 2025-11-25 10:00:26.887 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 10:00:26 compute-1 nova_compute[228683]: 2025-11-25 10:00:26.888 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:00:26 compute-1 nova_compute[228683]: 2025-11-25 10:00:26.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:26 compute-1 nova_compute[228683]: 2025-11-25 10:00:26.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.022 228687 DEBUG nova.network.neutron [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Successfully updated port: 521cf1b3-0c01-4af0-8577-970d4c4bf811 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.027 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.031 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.031 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.031 228687 DEBUG nova.network.neutron [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 10:00:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:27.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.094 228687 DEBUG nova.compute.manager [req-229f2d8a-5ba5-4c16-a89d-903764966c3a req-a00e498b-4660-4586-9402-193d5f6ebe68 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.095 228687 DEBUG nova.compute.manager [req-229f2d8a-5ba5-4c16-a89d-903764966c3a req-a00e498b-4660-4586-9402-193d5f6ebe68 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing instance network info cache due to event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.095 228687 DEBUG oslo_concurrency.lockutils [req-229f2d8a-5ba5-4c16-a89d-903764966c3a req-a00e498b-4660-4586-9402-193d5f6ebe68 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.128 228687 DEBUG nova.network.neutron [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 10:00:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:27.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:27 compute-1 ceph-mon[79643]: pgmap v877: 337 pgs: 337 active+clean; 67 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 902 KiB/s wr, 15 op/s
Nov 25 10:00:27 compute-1 nova_compute[228683]: 2025-11-25 10:00:27.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.043 228687 DEBUG nova.network.neutron [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updating instance_info_cache with network_info: [{"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.057 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.057 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Instance network_info: |[{"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.057 228687 DEBUG oslo_concurrency.lockutils [req-229f2d8a-5ba5-4c16-a89d-903764966c3a req-a00e498b-4660-4586-9402-193d5f6ebe68 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.057 228687 DEBUG nova.network.neutron [req-229f2d8a-5ba5-4c16-a89d-903764966c3a req-a00e498b-4660-4586-9402-193d5f6ebe68 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.059 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Start _get_guest_xml network_info=[{"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'boot_index': 0, 'device_name': '/dev/vda', 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.062 228687 WARNING nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.065 228687 DEBUG nova.virt.libvirt.host [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.065 228687 DEBUG nova.virt.libvirt.host [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.070 228687 DEBUG nova.virt.libvirt.host [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.070 228687 DEBUG nova.virt.libvirt.host [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
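
These two probes decide whether CPU shares/quota can be applied to the guest: the cgroup-v1 cpu controller is missing on this host, but the unified v2 hierarchy provides one. A minimal sketch of the v2 check, assuming the standard mount point (the real helper lives in nova.virt.libvirt.host):

```python
# The host has a usable CPU controller under cgroups v2 if "cpu" is listed
# in the root cgroup.controllers file.
def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
    try:
        with open(path) as f:
            return 'cpu' in f.read().split()
    except FileNotFoundError:
        return False  # no unified hierarchy mounted

print(has_cgroupsv2_cpu_controller())  # True on this host, per the log
```
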
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.070 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.070 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.071 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.071 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.071 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.071 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.072 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.072 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.072 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.072 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.072 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.072 228687 DEBUG nova.virt.hardware [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
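
Because flavor and image supplied no topology hints (the 0:0:0 limits and preferences above mean unset), the search space for one vCPU collapses to the single sockets=1, cores=1, threads=1 layout. A toy version of that enumeration under the logged 65536-per-dimension caps; nova's real search in nova.virt.hardware adds preference ordering on top:

```python
# All (sockets, cores, threads) triples whose product equals the vCPU count,
# capped at 65536 per dimension as in the log.
def possible_topologies(vcpus, limit=65536):
    for sockets in range(1, min(vcpus, limit) + 1):
        for cores in range(1, min(vcpus // sockets, limit) + 1):
            if vcpus % (sockets * cores) == 0:
                threads = vcpus // (sockets * cores)
                if threads <= limit:
                    yield (sockets, cores, threads)

print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the single option logged
```
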
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.074 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:28 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 10:00:28 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4163467953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.411 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.431 228687 DEBUG nova.storage.rbd_utils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 60f54767-63c6-411b-9e17-ab15032acf8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.434 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4163467953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:28 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 10:00:28 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4112314602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.773 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
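
The guest XML that follows needs the Ceph monitor address list for its rbd <disk> sources; that is what these `ceph mon dump` calls fetch (once for the root disk, once around the config-drive check). A sketch of extracting the legacy v1 addresses from that JSON; the mons/public_addrs/addrvec field names follow recent Ceph releases:

```python
import json

from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mons = json.loads(out)['mons']
# Keep the msgr v1 endpoints (host:6789), the form libvirt's <host> entries use.
hosts = [a['addr'] for m in mons for a in m['public_addrs']['addrvec']
         if a['type'] == 'v1']
print(hosts)
```
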
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.774 228687 DEBUG nova.virt.libvirt.vif [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-837733532',display_name='tempest-TestNetworkBasicOps-server-837733532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-837733532',id=11,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNiukP4wxZiApB9x/2kRtw/PJfNYSOLoES4vph0I/movDdqr9yG1icdy3J2mdNql0MNqHkYUhtUAIh5xPJgMgrHhQZvTHh+6yC7gk4N+GIh9otUDHxqbeYdj0RE6MUSzBw==',key_name='tempest-TestNetworkBasicOps-1440092039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-jo5ud5jr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:00:22Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=60f54767-63c6-411b-9e17-ab15032acf8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.775 228687 DEBUG nova.network.os_vif_util [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.776 228687 DEBUG nova.network.os_vif_util [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:8a:bd,bridge_name='br-int',has_traffic_filtering=True,id=521cf1b3-0c01-4af0-8577-970d4c4bf811,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521cf1b3-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
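
nova_to_osvif_vif maps the Neutron port dict shown above onto the os-vif object model; the "Converted object" line is the result. A minimal hand-built equivalent, with values copied from the log (illustrative only; nova's converter also fills in the subnet list and more):

    # Sketch only: construct the VIFOpenVSwitch logged above by hand.
    # Assumes python3-os-vif; nova normally goes through
    # nova.network.os_vif_util.nova_to_osvif_vif().
    from os_vif.objects import network, vif

    net = network.Network(id='3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc',
                          bridge='br-int', mtu=1442)

    my_vif = vif.VIFOpenVSwitch(
        id='521cf1b3-0c01-4af0-8577-970d4c4bf811',
        address='fa:16:3e:5f:8a:bd',
        network=net,
        vif_name='tap521cf1b3-0c',       # the "devname" from the port dict
        bridge_name='br-int',
        has_traffic_filtering=True,      # "port_filter": true
        preserve_on_delete=False,
        active=False,                    # port not yet up in OVN
        plugin='ovs',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='521cf1b3-0c01-4af0-8577-970d4c4bf811'))
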
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.777 228687 DEBUG nova.objects.instance [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid 60f54767-63c6-411b-9e17-ab15032acf8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.787 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <uuid>60f54767-63c6-411b-9e17-ab15032acf8f</uuid>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <name>instance-0000000b</name>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <memory>131072</memory>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <vcpu>1</vcpu>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <metadata>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <nova:name>tempest-TestNetworkBasicOps-server-837733532</nova:name>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <nova:creationTime>2025-11-25 10:00:28</nova:creationTime>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <nova:flavor name="m1.nano">
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <nova:memory>128</nova:memory>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <nova:disk>1</nova:disk>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <nova:swap>0</nova:swap>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <nova:vcpus>1</nova:vcpus>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       </nova:flavor>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <nova:owner>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       </nova:owner>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <nova:ports>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <nova:port uuid="521cf1b3-0c01-4af0-8577-970d4c4bf811">
Nov 25 10:00:28 compute-1 nova_compute[228683]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         </nova:port>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       </nova:ports>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </nova:instance>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   </metadata>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <sysinfo type="smbios">
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <system>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <entry name="manufacturer">RDO</entry>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <entry name="product">OpenStack Compute</entry>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <entry name="serial">60f54767-63c6-411b-9e17-ab15032acf8f</entry>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <entry name="uuid">60f54767-63c6-411b-9e17-ab15032acf8f</entry>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <entry name="family">Virtual Machine</entry>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </system>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   </sysinfo>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <os>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <boot dev="hd"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <smbios mode="sysinfo"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   </os>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <features>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <acpi/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <apic/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <vmcoreinfo/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   </features>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <clock offset="utc">
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <timer name="hpet" present="no"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   </clock>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <cpu mode="host-model" match="exact">
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   </cpu>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   <devices>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <disk type="network" device="disk">
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <driver type="raw" cache="none"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <source protocol="rbd" name="vms/60f54767-63c6-411b-9e17-ab15032acf8f_disk">
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <host name="192.168.122.100" port="6789"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <host name="192.168.122.102" port="6789"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <host name="192.168.122.101" port="6789"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       </source>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <auth username="openstack">
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       </auth>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <target dev="vda" bus="virtio"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </disk>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <disk type="network" device="cdrom">
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <driver type="raw" cache="none"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <source protocol="rbd" name="vms/60f54767-63c6-411b-9e17-ab15032acf8f_disk.config">
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <host name="192.168.122.100" port="6789"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <host name="192.168.122.102" port="6789"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <host name="192.168.122.101" port="6789"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       </source>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <auth username="openstack">
Nov 25 10:00:28 compute-1 nova_compute[228683]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       </auth>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <target dev="sda" bus="sata"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </disk>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <interface type="ethernet">
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <mac address="fa:16:3e:5f:8a:bd"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <model type="virtio"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <mtu size="1442"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <target dev="tap521cf1b3-0c"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </interface>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <serial type="pty">
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <log file="/var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f/console.log" append="off"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </serial>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <video>
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <model type="virtio"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </video>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <input type="tablet" bus="usb"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <rng model="virtio">
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <backend model="random">/dev/urandom</backend>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </rng>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <controller type="usb" index="0"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     <memballoon model="virtio">
Nov 25 10:00:28 compute-1 nova_compute[228683]:       <stats period="10"/>
Nov 25 10:00:28 compute-1 nova_compute[228683]:     </memballoon>
Nov 25 10:00:28 compute-1 nova_compute[228683]:   </devices>
Nov 25 10:00:28 compute-1 nova_compute[228683]: </domain>
Nov 25 10:00:28 compute-1 nova_compute[228683]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
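
Two reading aids for the XML above: libvirt's <memory> element is in KiB, so 131072 is the flavor's 128 MB, and the RBD <source> blocks carry the monitor addresses fetched at the start of this sequence. A minimal sketch of what happens next, defining and starting the domain with the libvirt Python binding (guest.xml is a stand-in for the dump; nova's driver does the equivalent with far more error handling):

    # Sketch only: define and boot a guest from XML like the dump above.
    # Assumes python3-libvirt and a local qemu:///system daemon.
    import libvirt

    with open('guest.xml') as f:      # hypothetical file holding the XML
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)     # persist the domain definition
        dom.create()                  # power on; systemd-machined then
                                      # registers machine qemu-N-instance-...
    finally:
        conn.close()
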
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.788 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Preparing to wait for external event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.789 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.789 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.789 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
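
The event is registered before the VIF is plugged so the later wait cannot race the notification; Neutron delivers network-vif-plugged once OVN reports the port up (received below at 10:00:29.596). A minimal sketch of that register-act-wait pattern using a plain threading.Event; the names are illustrative, not nova's internal API (nova uses eventlet plus the per-instance "-events" lock shown above):

    # Sketch only: the external-event handshake in miniature.
    import threading

    events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, name):
        return events.setdefault((instance_uuid, name), threading.Event())

    def deliver(instance_uuid, name):
        # Invoked when the network-vif-plugged notification arrives.
        events[(instance_uuid, name)].set()

    ev = prepare('60f54767-63c6-411b-9e17-ab15032acf8f',
                 'network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811')
    # ... plug the VIF, define and start the domain ...
    if not ev.wait(timeout=300):  # nova's vif_plugging_timeout defaults to 300s
        raise RuntimeError('timed out waiting for network-vif-plugged')
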
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.790 228687 DEBUG nova.virt.libvirt.vif [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-837733532',display_name='tempest-TestNetworkBasicOps-server-837733532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-837733532',id=11,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNiukP4wxZiApB9x/2kRtw/PJfNYSOLoES4vph0I/movDdqr9yG1icdy3J2mdNql0MNqHkYUhtUAIh5xPJgMgrHhQZvTHh+6yC7gk4N+GIh9otUDHxqbeYdj0RE6MUSzBw==',key_name='tempest-TestNetworkBasicOps-1440092039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-jo5ud5jr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:00:22Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=60f54767-63c6-411b-9e17-ab15032acf8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.790 228687 DEBUG nova.network.os_vif_util [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.790 228687 DEBUG nova.network.os_vif_util [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:8a:bd,bridge_name='br-int',has_traffic_filtering=True,id=521cf1b3-0c01-4af0-8577-970d4c4bf811,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521cf1b3-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.791 228687 DEBUG os_vif [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:8a:bd,bridge_name='br-int',has_traffic_filtering=True,id=521cf1b3-0c01-4af0-8577-970d4c4bf811,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521cf1b3-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.791 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.791 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.792 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.794 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.794 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap521cf1b3-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.795 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap521cf1b3-0c, col_values=(('external_ids', {'iface-id': '521cf1b3-0c01-4af0-8577-970d4c4bf811', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:8a:bd', 'vm-uuid': '60f54767-63c6-411b-9e17-ab15032acf8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.796 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:28 compute-1 NetworkManager[48856]: <info>  [1764064828.7971] manager: (tap521cf1b3-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.798 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.801 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.802 228687 INFO os_vif [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:8a:bd,bridge_name='br-int',has_traffic_filtering=True,id=521cf1b3-0c01-4af0-8577-970d4c4bf811,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521cf1b3-0c')
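
os-vif talks to ovsdb-server through ovsdbapp rather than shelling out to ovs-vsctl; the AddBridgeCommand/AddPortCommand/DbSetCommand lines above are its transaction trace. A minimal sketch of the same operations with ovsdbapp, collapsed into a single transaction for brevity (the log shows them issued separately), assuming the default ovsdb socket path:

    # Sketch only: the OVS side of the plug as one ovsdbapp transaction.
    # Assumes python3-ovsdbapp and ovsdb-server on the default unix socket.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap521cf1b3-0c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap521cf1b3-0c',
            ('external_ids', {
                'iface-id': '521cf1b3-0c01-4af0-8577-970d4c4bf811',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:5f:8a:bd',
                'vm-uuid': '60f54767-63c6-411b-9e17-ab15032acf8f'})))

Setting iface-id in external_ids is the handoff point: ovn-controller matches it against the Southbound logical port and claims the binding, which is exactly what the ovn_controller "Claiming lport" lines below record.
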
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.831 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.831 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.831 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:5f:8a:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.832 228687 INFO nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Using config drive
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.849 228687 DEBUG nova.storage.rbd_utils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 60f54767-63c6-411b-9e17-ab15032acf8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:28 compute-1 nova_compute[228683]: 2025-11-25 10:00:28.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:29.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.097 228687 INFO nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Creating config drive at /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f/disk.config
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.101 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixtlnpzm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.141 228687 DEBUG nova.network.neutron [req-229f2d8a-5ba5-4c16-a89d-903764966c3a req-a00e498b-4660-4586-9402-193d5f6ebe68 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updated VIF entry in instance network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.141 228687 DEBUG nova.network.neutron [req-229f2d8a-5ba5-4c16-a89d-903764966c3a req-a00e498b-4660-4586-9402-193d5f6ebe68 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updating instance_info_cache with network_info: [{"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.157 228687 DEBUG oslo_concurrency.lockutils [req-229f2d8a-5ba5-4c16-a89d-903764966c3a req-a00e498b-4660-4586-9402-193d5f6ebe68 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.220 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixtlnpzm" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
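
The config drive is a plain ISO 9660 image labelled config-2, built from a temporary staging tree (the /tmp/tmpixtlnpzm above, which holds openstack/latest/meta_data.json and friends). A minimal sketch of the same mkisofs invocation via subprocess; the staging path here is illustrative:

    # Sketch only: build a config-2 labelled ISO as the logged command does.
    # Assumes /usr/bin/mkisofs (genisoimage) is installed; 'staging' stands
    # in for the temporary tree nova populates before this step.
    import subprocess

    staging = '/tmp/config-drive-staging'   # hypothetical path
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute',
         '-quiet', '-J', '-r',
         '-V', 'config-2',                  # cloud-init locates it by label
         staging],
        check=True)
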
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.239 228687 DEBUG nova.storage.rbd_utils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 60f54767-63c6-411b-9e17-ab15032acf8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.241 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f/disk.config 60f54767-63c6-411b-9e17-ab15032acf8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.321 228687 DEBUG oslo_concurrency.processutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f/disk.config 60f54767-63c6-411b-9e17-ab15032acf8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.322 228687 INFO nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Deleting local config drive /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f/disk.config because it was imported into RBD.
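
Since this deployment backs Nova disks with Ceph, the freshly built ISO is imported into the vms pool and the local file removed, exactly as the two lines above record. A minimal sketch of that step:

    # Sketch only: push the local config drive into RBD, then delete it.
    import os
    import subprocess

    local = 'disk.config'
    image = '60f54767-63c6-411b-9e17-ab15032acf8f_disk.config'

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local, image,
         '--image-format=2',               # format 2 allows cloning/striping
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(local)   # "Deleting local config drive ... imported into RBD"
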
Nov 25 10:00:29 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 25 10:00:29 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 25 10:00:29 compute-1 kernel: tap521cf1b3-0c: entered promiscuous mode
Nov 25 10:00:29 compute-1 NetworkManager[48856]: <info>  [1764064829.3904] manager: (tap521cf1b3-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Nov 25 10:00:29 compute-1 ovn_controller[133620]: 2025-11-25T10:00:29Z|00058|binding|INFO|Claiming lport 521cf1b3-0c01-4af0-8577-970d4c4bf811 for this chassis.
Nov 25 10:00:29 compute-1 ovn_controller[133620]: 2025-11-25T10:00:29Z|00059|binding|INFO|521cf1b3-0c01-4af0-8577-970d4c4bf811: Claiming fa:16:3e:5f:8a:bd 10.100.0.5
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.391 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.402 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:8a:bd 10.100.0.5'], port_security=['fa:16:3e:5f:8a:bd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '60f54767-63c6-411b-9e17-ab15032acf8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5ef8d5af-a9c0-4daa-a483-9d737f626c12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6121cf32-17ed-44cd-a0b1-25d4c69fcad0, chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], logical_port=521cf1b3-0c01-4af0-8577-970d4c4bf811) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.403 142940 INFO neutron.agent.ovn.metadata.agent [-] Port 521cf1b3-0c01-4af0-8577-970d4c4bf811 in datapath 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc bound to our chassis
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.404 142940 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.413 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[e355aa26-020f-416c-840e-b39c420f993b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.414 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3ce4f6a0-b1 in ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.415 231684 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3ce4f6a0-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.415 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9056a5-8d43-41a6-8d5e-3db653dbc640]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.416 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[d786aef5-bf8a-494b-9ef5-546ea1d55267]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
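
Provisioning metadata for the datapath means creating an ovnmeta- network namespace with a veth pair: the -b0 end stays on the host and is attached to br-int a few lines below, while the -b1 peer moves into the namespace, where haproxy will answer 169.254.169.254. A minimal sketch with pyroute2, the library behind neutron's privsep'd ip_lib calls; interface and namespace names copied from the log:

    # Sketch only: the namespace/veth plumbing behind "Creating VETH
    # tap3ce4f6a0-b1 in ovnmeta-... namespace". Requires root.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc'
    netns.create(ns)

    ipr = IPRoute()
    # Host-side end plus its in-namespace peer.
    ipr.link('add', ifname='tap3ce4f6a0-b0', kind='veth',
             peer='tap3ce4f6a0-b1')
    peer = ipr.link_lookup(ifname='tap3ce4f6a0-b1')[0]
    ipr.link('set', index=peer, net_ns_fd=ns)   # move peer into namespace
    host = ipr.link_lookup(ifname='tap3ce4f6a0-b0')[0]
    ipr.link('set', index=host, state='up')
    ipr.close()
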
Nov 25 10:00:29 compute-1 systemd-machined[192680]: New machine qemu-4-instance-0000000b.
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.427 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[9462b160-b9fa-47be-9f9b-e2e8b6c72774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Nov 25 10:00:29 compute-1 systemd-udevd[236845]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.448 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[deb3b57a-6119-432c-9045-836f3d72786b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 NetworkManager[48856]: <info>  [1764064829.4607] device (tap521cf1b3-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:00:29 compute-1 NetworkManager[48856]: <info>  [1764064829.4616] device (tap521cf1b3-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.467 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:29 compute-1 ovn_controller[133620]: 2025-11-25T10:00:29Z|00060|binding|INFO|Setting lport 521cf1b3-0c01-4af0-8577-970d4c4bf811 ovn-installed in OVS
Nov 25 10:00:29 compute-1 ovn_controller[133620]: 2025-11-25T10:00:29Z|00061|binding|INFO|Setting lport 521cf1b3-0c01-4af0-8577-970d4c4bf811 up in Southbound
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.471 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[52d69e16-ff46-443e-ba1c-ead5053d6a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.474 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[a8032a72-a9ca-4a4c-81de-c04cec4de7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.474 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:29 compute-1 NetworkManager[48856]: <info>  [1764064829.4773] manager: (tap3ce4f6a0-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.504 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[e4cd1dce-06f9-4f60-9ca1-1ee22a1f327e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.506 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9d736d-0ceb-483f-a4c6-c1356dfae2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 NetworkManager[48856]: <info>  [1764064829.5220] device (tap3ce4f6a0-b0): carrier: link connected
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.526 231741 DEBUG oslo.privsep.daemon [-] privsep: reply[bae26237-43a8-4590-b11c-c2abf506da26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.538 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[515d1acb-f383-4c1f-a15f-493264a741a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ce4f6a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:47:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 361407, 'reachable_time': 37934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236867, 'error': None, 'target': 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:29.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.552 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca6f730-d5df-4b36-8db6-933e3f295815]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:47f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 361407, 'tstamp': 361407}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236868, 'error': None, 'target': 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.563 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa99c33-0b12-474f-9554-025b67077737]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ce4f6a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:47:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 361407, 'reachable_time': 37934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236869, 'error': None, 'target': 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.585 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[025a6d5a-a927-44c7-8ba9-1449ca767bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.596 228687 DEBUG nova.compute.manager [req-e27bccf5-5003-4636-804a-6d9fa13b880e req-b0e28449-96ab-4283-9b6e-b26dd8ee5f53 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.596 228687 DEBUG oslo_concurrency.lockutils [req-e27bccf5-5003-4636-804a-6d9fa13b880e req-b0e28449-96ab-4283-9b6e-b26dd8ee5f53 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.596 228687 DEBUG oslo_concurrency.lockutils [req-e27bccf5-5003-4636-804a-6d9fa13b880e req-b0e28449-96ab-4283-9b6e-b26dd8ee5f53 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.597 228687 DEBUG oslo_concurrency.lockutils [req-e27bccf5-5003-4636-804a-6d9fa13b880e req-b0e28449-96ab-4283-9b6e-b26dd8ee5f53 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.597 228687 DEBUG nova.compute.manager [req-e27bccf5-5003-4636-804a-6d9fa13b880e req-b0e28449-96ab-4283-9b6e-b26dd8ee5f53 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Processing event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.624 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[9931edb4-5926-4171-b606-f61ad3c96dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.625 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ce4f6a0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.625 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.626 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ce4f6a0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:29 compute-1 NetworkManager[48856]: <info>  [1764064829.6280] manager: (tap3ce4f6a0-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 25 10:00:29 compute-1 kernel: tap3ce4f6a0-b0: entered promiscuous mode
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.629 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.630 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3ce4f6a0-b0, col_values=(('external_ids', {'iface-id': '5755bc32-958f-433d-9a4f-77334dafcf22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:29 compute-1 ovn_controller[133620]: 2025-11-25T10:00:29Z|00062|binding|INFO|Releasing lport 5755bc32-958f-433d-9a4f-77334dafcf22 from this chassis (sb_readonly=0)
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.632 142940 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.632 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[84c35d43-afc5-4c9c-b0b5-f2db7c496261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.633 142940 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: global
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     log         /dev/log local0 debug
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     log-tag     haproxy-metadata-proxy-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     user        root
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     group       root
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     maxconn     1024
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     pidfile     /var/lib/neutron/external/pids/3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc.pid.haproxy
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     daemon
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: defaults
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     log global
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     mode http
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     option httplog
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     option dontlognull
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     option http-server-close
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     option forwardfor
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     retries                 3
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     timeout http-request    30s
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     timeout connect         30s
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     timeout client          32s
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     timeout server          32s
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     timeout http-keep-alive 30s
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: listen listener
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     bind 169.254.169.254:80
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:     http-request add-header X-OVN-Network-ID 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 10:00:29 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:00:29.633 142940 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'env', 'PROCESS_TAG=haproxy-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.647 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:29 compute-1 ceph-mon[79643]: pgmap v878: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 25 10:00:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4112314602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.874 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.875 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064829.8743608, 60f54767-63c6-411b-9e17-ab15032acf8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.875 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] VM Started (Lifecycle Event)
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.881 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.887 228687 INFO nova.virt.libvirt.driver [-] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Instance spawned successfully.
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.887 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.891 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.893 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.904 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.904 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.905 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.905 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.906 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.906 228687 DEBUG nova.virt.libvirt.driver [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.923 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:29 compute-1 podman[236939]: 2025-11-25 10:00:29.93493677 +0000 UTC m=+0.038935967 container create 304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.939 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.941 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064829.8750844, 60f54767-63c6-411b-9e17-ab15032acf8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.942 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] VM Paused (Lifecycle Event)
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.961 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:00:29 compute-1 systemd[1]: Started libpod-conmon-304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d.scope.
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.964 228687 DEBUG nova.virt.driver [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] Emitting event <LifecycleEvent: 1764064829.87919, 60f54767-63c6-411b-9e17-ab15032acf8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.964 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] VM Resumed (Lifecycle Event)
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.968 228687 INFO nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Took 7.65 seconds to spawn the instance on the hypervisor.
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.969 228687 DEBUG nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.978 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.980 228687 DEBUG nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:00:29 compute-1 systemd[1]: Started libcrun container.
Nov 25 10:00:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dae0c866ac2c06f6ca318e66e7f7a6657569f89ef9fbe15160807790e79ead4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:29 compute-1 podman[236939]: 2025-11-25 10:00:29.994195086 +0000 UTC m=+0.098194293 container init 304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:00:29 compute-1 nova_compute[228683]: 2025-11-25 10:00:29.995 228687 INFO nova.compute.manager [None req-0322de71-9a8c-461a-ac6c-4f84a50f7066 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 10:00:29 compute-1 podman[236939]: 2025-11-25 10:00:29.999027504 +0000 UTC m=+0.103026702 container start 304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:00:30 compute-1 podman[236939]: 2025-11-25 10:00:29.920363579 +0000 UTC m=+0.024362797 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 10:00:30 compute-1 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236951]: [NOTICE]   (236955) : New worker (236957) forked
Nov 25 10:00:30 compute-1 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236951]: [NOTICE]   (236955) : Loading success.
Nov 25 10:00:30 compute-1 nova_compute[228683]: 2025-11-25 10:00:30.019 228687 INFO nova.compute.manager [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Took 8.30 seconds to build instance.
Nov 25 10:00:30 compute-1 nova_compute[228683]: 2025-11-25 10:00:30.031 228687 DEBUG oslo_concurrency.lockutils [None req-4cae3b44-0080-4b04-9ced-875c34328c51 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:30 compute-1 nova_compute[228683]: 2025-11-25 10:00:30.890 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:30 compute-1 ceph-mon[79643]: pgmap v879: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 25 10:00:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:31.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:31.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:31 compute-1 ovn_controller[133620]: 2025-11-25T10:00:31Z|00063|binding|INFO|Releasing lport 5755bc32-958f-433d-9a4f-77334dafcf22 from this chassis (sb_readonly=0)
Nov 25 10:00:31 compute-1 NetworkManager[48856]: <info>  [1764064831.7148] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 25 10:00:31 compute-1 NetworkManager[48856]: <info>  [1764064831.7155] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 25 10:00:31 compute-1 nova_compute[228683]: 2025-11-25 10:00:31.725 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:31 compute-1 nova_compute[228683]: 2025-11-25 10:00:31.754 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:31 compute-1 ovn_controller[133620]: 2025-11-25T10:00:31Z|00064|binding|INFO|Releasing lport 5755bc32-958f-433d-9a4f-77334dafcf22 from this chassis (sb_readonly=0)
Nov 25 10:00:31 compute-1 nova_compute[228683]: 2025-11-25 10:00:31.758 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.063 228687 DEBUG nova.compute.manager [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.064 228687 DEBUG oslo_concurrency.lockutils [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.064 228687 DEBUG oslo_concurrency.lockutils [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.064 228687 DEBUG oslo_concurrency.lockutils [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.064 228687 DEBUG nova.compute.manager [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] No waiting events found dispatching network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.064 228687 WARNING nova.compute.manager [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received unexpected event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 for instance with vm_state active and task_state None.
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.064 228687 DEBUG nova.compute.manager [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.065 228687 DEBUG nova.compute.manager [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing instance network info cache due to event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.065 228687 DEBUG oslo_concurrency.lockutils [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.065 228687 DEBUG oslo_concurrency.lockutils [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:00:32 compute-1 nova_compute[228683]: 2025-11-25 10:00:32.065 228687 DEBUG nova.network.neutron [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:00:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:33.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:33 compute-1 ceph-mon[79643]: pgmap v880: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:33 compute-1 nova_compute[228683]: 2025-11-25 10:00:33.366 228687 DEBUG nova.network.neutron [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updated VIF entry in instance network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:00:33 compute-1 nova_compute[228683]: 2025-11-25 10:00:33.367 228687 DEBUG nova.network.neutron [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updating instance_info_cache with network_info: [{"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:00:33 compute-1 nova_compute[228683]: 2025-11-25 10:00:33.380 228687 DEBUG oslo_concurrency.lockutils [req-4f35a3c2-92e6-4bbb-92d5-3c9b2d4bdffe req-31517192-dbb9-424e-8ea7-7421e6590010 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:00:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:33.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:33 compute-1 nova_compute[228683]: 2025-11-25 10:00:33.797 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:34 compute-1 nova_compute[228683]: 2025-11-25 10:00:34.928 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:35.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:35 compute-1 ceph-mon[79643]: pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:35.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:36 compute-1 podman[236966]: 2025-11-25 10:00:36.825051863 +0000 UTC m=+0.076225370 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller)
Nov 25 10:00:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:37.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:37 compute-1 ceph-mon[79643]: pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:37.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:38 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/928370122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:38 compute-1 nova_compute[228683]: 2025-11-25 10:00:38.800 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:39.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:39 compute-1 ceph-mon[79643]: pgmap v883: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 88 op/s
Nov 25 10:00:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:39.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:39 compute-1 nova_compute[228683]: 2025-11-25 10:00:39.928 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:41.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:41 compute-1 ceph-mon[79643]: pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:00:41 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2516742035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:41 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2678682281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:41.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:41 compute-1 podman[236994]: 2025-11-25 10:00:41.787137001 +0000 UTC m=+0.041053279 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 10:00:41 compute-1 ovn_controller[133620]: 2025-11-25T10:00:41Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:8a:bd 10.100.0.5
Nov 25 10:00:41 compute-1 ovn_controller[133620]: 2025-11-25T10:00:41Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:8a:bd 10.100.0.5
Nov 25 10:00:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:42 compute-1 sudo[237011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:00:42 compute-1 sudo[237011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:42 compute-1 sudo[237011]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:43.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:43 compute-1 ceph-mon[79643]: pgmap v885: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:43.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:43 compute-1 nova_compute[228683]: 2025-11-25 10:00:43.803 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:44 compute-1 nova_compute[228683]: 2025-11-25 10:00:44.930 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:45 compute-1 ceph-mon[79643]: pgmap v886: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Nov 25 10:00:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:45.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:47.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:47 compute-1 ceph-mon[79643]: pgmap v887: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Nov 25 10:00:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:47.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:48 compute-1 nova_compute[228683]: 2025-11-25 10:00:48.806 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:49.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:49 compute-1 ceph-mon[79643]: pgmap v888: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:49.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:49 compute-1 nova_compute[228683]: 2025-11-25 10:00:49.931 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:51.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:51 compute-1 ceph-mon[79643]: pgmap v889: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:51.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:53 compute-1 ceph-mon[79643]: pgmap v890: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 10:00:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:53.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:53 compute-1 nova_compute[228683]: 2025-11-25 10:00:53.808 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3267203020' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:00:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3267203020' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:00:54 compute-1 nova_compute[228683]: 2025-11-25 10:00:54.933 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:55.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:55 compute-1 ceph-mon[79643]: pgmap v891: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 74 op/s
Nov 25 10:00:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:57.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:57 compute-1 ceph-mon[79643]: pgmap v892: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 25 10:00:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:57 compute-1 podman[237044]: 2025-11-25 10:00:57.79102847 +0000 UTC m=+0.040285612 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:00:58 compute-1 nova_compute[228683]: 2025-11-25 10:00:58.812 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:59.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:59 compute-1 ceph-mon[79643]: pgmap v893: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 954 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Nov 25 10:00:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:00:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:59 compute-1 nova_compute[228683]: 2025-11-25 10:00:59.935 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:01.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:01 compute-1 ceph-mon[79643]: pgmap v894: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 10:01:01 compute-1 CROND[237064]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 10:01:01 compute-1 run-parts[237067]: (/etc/cron.hourly) starting 0anacron
Nov 25 10:01:01 compute-1 run-parts[237073]: (/etc/cron.hourly) finished 0anacron
Nov 25 10:01:01 compute-1 CROND[237063]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 10:01:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:01.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:02 compute-1 sudo[237074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:01:02 compute-1 sudo[237074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:02 compute-1 sudo[237074]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:03.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:03 compute-1 ceph-mon[79643]: pgmap v895: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 10:01:03 compute-1 nova_compute[228683]: 2025-11-25 10:01:03.568 228687 INFO nova.compute.manager [None req-386c03d4-64c3-44cb-b59b-98cff842004d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Get console output
Nov 25 10:01:03 compute-1 nova_compute[228683]: 2025-11-25 10:01:03.572 233344 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
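Annotation: "can't concat NoneType to bytes" is CPython's TypeError message for adding None to a bytes object (b"data" + None); nova's console reader hits it when the pty read yields no data, and logs that it ignored the error. A defensive accumulation pattern, illustrative only and not nova's code:

    def drain(read_chunk) -> bytes:
        """Accumulate console output; read_chunk() may return None or b'' at EOF."""
        buf = b""
        while True:
            chunk = read_chunk()
            if not chunk:          # a bare None would otherwise raise:
                return buf         # TypeError: can't concat NoneType to bytes
            buf += chunk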
Nov 25 10:01:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:03 compute-1 nova_compute[228683]: 2025-11-25 10:01:03.814 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:04 compute-1 nova_compute[228683]: 2025-11-25 10:01:04.937 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:04 compute-1 nova_compute[228683]: 2025-11-25 10:01:04.964 228687 DEBUG nova.compute.manager [req-4afd94bd-f1eb-4bf4-8fa0-e1796018643c req-796a5338-3e01-4171-a62f-1729fa10ba84 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:04 compute-1 nova_compute[228683]: 2025-11-25 10:01:04.965 228687 DEBUG nova.compute.manager [req-4afd94bd-f1eb-4bf4-8fa0-e1796018643c req-796a5338-3e01-4171-a62f-1729fa10ba84 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing instance network info cache due to event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:01:04 compute-1 nova_compute[228683]: 2025-11-25 10:01:04.965 228687 DEBUG oslo_concurrency.lockutils [req-4afd94bd-f1eb-4bf4-8fa0-e1796018643c req-796a5338-3e01-4171-a62f-1729fa10ba84 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:01:04 compute-1 nova_compute[228683]: 2025-11-25 10:01:04.965 228687 DEBUG oslo_concurrency.lockutils [req-4afd94bd-f1eb-4bf4-8fa0-e1796018643c req-796a5338-3e01-4171-a62f-1729fa10ba84 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:01:04 compute-1 nova_compute[228683]: 2025-11-25 10:01:04.965 228687 DEBUG nova.network.neutron [req-4afd94bd-f1eb-4bf4-8fa0-e1796018643c req-796a5338-3e01-4171-a62f-1729fa10ba84 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:01:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:05.004 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:05.005 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:05.005 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
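Annotation: the three lockutils lines above are oslo.concurrency's standard trace for a named lock: announce the acquire, report how long the caller waited, then report how long the lock was held. The same bookkeeping re-created with the standard library, purely illustrative:

    import threading, time

    _locks: dict[str, threading.Lock] = {}

    def with_named_lock(name: str, fn):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0          # "acquired ... waited 0.001s"
            t1 = time.monotonic()
            try:
                return fn()
            finally:
                held = time.monotonic() - t1        # "released ... held 0.000s"
                print(f"lock {name!r}: waited {waited:.3f}s, held {held:.3f}s")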
Nov 25 10:01:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:05.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:05 compute-1 ceph-mon[79643]: pgmap v896: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 10:01:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:06 compute-1 nova_compute[228683]: 2025-11-25 10:01:06.026 228687 INFO nova.compute.manager [None req-588497ce-907e-4108-b24e-5d7ad5817565 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Get console output
Nov 25 10:01:06 compute-1 nova_compute[228683]: 2025-11-25 10:01:06.030 233344 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 10:01:06 compute-1 nova_compute[228683]: 2025-11-25 10:01:06.617 228687 DEBUG nova.network.neutron [req-4afd94bd-f1eb-4bf4-8fa0-e1796018643c req-796a5338-3e01-4171-a62f-1729fa10ba84 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updated VIF entry in instance network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:01:06 compute-1 nova_compute[228683]: 2025-11-25 10:01:06.618 228687 DEBUG nova.network.neutron [req-4afd94bd-f1eb-4bf4-8fa0-e1796018643c req-796a5338-3e01-4171-a62f-1729fa10ba84 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updating instance_info_cache with network_info: [{"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
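Annotation: the network_info payload logged above is valid JSON. A sketch that extracts the fixed and floating addresses from such a cache entry, with the structure taken from the entry as logged:

    import json

    def addresses(network_info_json: str):
        """Yield (fixed_ip, [floating_ips]) pairs from a network_info list."""
        for port in json.loads(network_info_json):
            for subnet in port["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], [f["address"] for f in ip.get("floating_ips", [])]

    # For the entry above: ("10.100.0.5", ["192.168.122.229"])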
Nov 25 10:01:06 compute-1 nova_compute[228683]: 2025-11-25 10:01:06.633 228687 DEBUG oslo_concurrency.lockutils [req-4afd94bd-f1eb-4bf4-8fa0-e1796018643c req-796a5338-3e01-4171-a62f-1729fa10ba84 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.035 228687 DEBUG nova.compute.manager [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-unplugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.035 228687 DEBUG oslo_concurrency.lockutils [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.036 228687 DEBUG oslo_concurrency.lockutils [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.036 228687 DEBUG oslo_concurrency.lockutils [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.036 228687 DEBUG nova.compute.manager [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] No waiting events found dispatching network-vif-unplugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.036 228687 WARNING nova.compute.manager [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received unexpected event network-vif-unplugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 for instance with vm_state active and task_state None.
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.036 228687 DEBUG nova.compute.manager [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.037 228687 DEBUG oslo_concurrency.lockutils [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.037 228687 DEBUG oslo_concurrency.lockutils [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.037 228687 DEBUG oslo_concurrency.lockutils [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.037 228687 DEBUG nova.compute.manager [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] No waiting events found dispatching network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.037 228687 WARNING nova.compute.manager [req-fbd8d1f4-92cf-4980-88f3-46c96ba91cb2 req-fc193dc5-c8af-4861-994f-45ed48f0be47 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received unexpected event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 for instance with vm_state active and task_state None.
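Annotation: the acquire/pop/warn sequence above reflects a waiter registry keyed by instance UUID: under the per-instance "-events" lock, an arriving external event either wakes a registered waiter or, as here, finds none and is merely logged as unexpected, which is harmless for an instance that is active with no task in flight. A compressed illustration of the pattern, not nova's code:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            # uuid -> {event_name: waiter}
            self._waiters: dict[str, dict[str, threading.Event]] = {}

        def pop_instance_event(self, uuid: str, name: str):
            with self._lock:                       # the "<uuid>-events" lock
                waiter = self._waiters.get(uuid, {}).pop(name, None)
            if waiter is None:
                print(f"No waiting events found dispatching {name}")
            return waiter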
Nov 25 10:01:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:07.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:07 compute-1 ceph-mon[79643]: pgmap v897: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 10:01:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:07.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:07 compute-1 podman[237102]: 2025-11-25 10:01:07.82849096 +0000 UTC m=+0.080011137 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.836 228687 INFO nova.compute.manager [None req-9ad9dccf-4522-4ada-8228-168748c12e4f c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Get console output
Nov 25 10:01:07 compute-1 nova_compute[228683]: 2025-11-25 10:01:07.839 233344 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 10:01:08 compute-1 nova_compute[228683]: 2025-11-25 10:01:08.816 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:09 compute-1 nova_compute[228683]: 2025-11-25 10:01:09.128 228687 DEBUG nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:09 compute-1 nova_compute[228683]: 2025-11-25 10:01:09.128 228687 DEBUG nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing instance network info cache due to event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:01:09 compute-1 nova_compute[228683]: 2025-11-25 10:01:09.128 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:01:09 compute-1 nova_compute[228683]: 2025-11-25 10:01:09.128 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:01:09 compute-1 nova_compute[228683]: 2025-11-25 10:01:09.128 228687 DEBUG nova.network.neutron [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:01:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:09.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:09 compute-1 ceph-mon[79643]: pgmap v898: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 1 op/s
Nov 25 10:01:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:09.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:09 compute-1 nova_compute[228683]: 2025-11-25 10:01:09.938 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1682084571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:11.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.152 228687 DEBUG nova.network.neutron [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updated VIF entry in instance network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.153 228687 DEBUG nova.network.neutron [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updating instance_info_cache with network_info: [{"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.173 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.174 228687 DEBUG nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.174 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.174 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.174 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.174 228687 DEBUG nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] No waiting events found dispatching network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.174 228687 WARNING nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received unexpected event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 for instance with vm_state active and task_state None.
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.174 228687 DEBUG nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.175 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.175 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.175 228687 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.175 228687 DEBUG nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] No waiting events found dispatching network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.175 228687 WARNING nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received unexpected event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 for instance with vm_state active and task_state None.
Nov 25 10:01:11 compute-1 ceph-mon[79643]: pgmap v899: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 1 op/s
Nov 25 10:01:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:11.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.838 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.838 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.838 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.839 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.839 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.840 228687 INFO nova.compute.manager [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Terminating instance
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.841 228687 DEBUG nova.compute.manager [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 10:01:11 compute-1 kernel: tap521cf1b3-0c (unregistering): left promiscuous mode
Nov 25 10:01:11 compute-1 NetworkManager[48856]: <info>  [1764064871.8759] device (tap521cf1b3-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 10:01:11 compute-1 ovn_controller[133620]: 2025-11-25T10:01:11Z|00065|binding|INFO|Releasing lport 521cf1b3-0c01-4af0-8577-970d4c4bf811 from this chassis (sb_readonly=0)
Nov 25 10:01:11 compute-1 ovn_controller[133620]: 2025-11-25T10:01:11Z|00066|binding|INFO|Setting lport 521cf1b3-0c01-4af0-8577-970d4c4bf811 down in Southbound
Nov 25 10:01:11 compute-1 ovn_controller[133620]: 2025-11-25T10:01:11Z|00067|binding|INFO|Removing iface tap521cf1b3-0c ovn-installed in OVS
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.880 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.882 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:11.884 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:8a:bd 10.100.0.5'], port_security=['fa:16:3e:5f:8a:bd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '60f54767-63c6-411b-9e17-ab15032acf8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5ef8d5af-a9c0-4daa-a483-9d737f626c12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6121cf32-17ed-44cd-a0b1-25d4c69fcad0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>], logical_port=521cf1b3-0c01-4af0-8577-970d4c4bf811) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf5570b850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:01:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:11.885 142940 INFO neutron.agent.ovn.metadata.agent [-] Port 521cf1b3-0c01-4af0-8577-970d4c4bf811 in datapath 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc unbound from our chassis
Nov 25 10:01:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:11.886 142940 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 10:01:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:11.889 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[858b868b-a855-477a-9c77-364039b546c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:01:11 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:11.889 142940 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc namespace which is not needed anymore
Nov 25 10:01:11 compute-1 nova_compute[228683]: 2025-11-25 10:01:11.909 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:11 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 25 10:01:11 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 11.502s CPU time.
Nov 25 10:01:11 compute-1 systemd-machined[192680]: Machine qemu-4-instance-0000000b terminated.
Nov 25 10:01:11 compute-1 podman[237129]: 2025-11-25 10:01:11.951150672 +0000 UTC m=+0.053209915 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 10:01:11 compute-1 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236951]: [NOTICE]   (236955) : haproxy version is 2.8.14-c23fe91
Nov 25 10:01:11 compute-1 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236951]: [NOTICE]   (236955) : path to executable is /usr/sbin/haproxy
Nov 25 10:01:11 compute-1 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236951]: [ALERT]    (236955) : Current worker (236957) exited with code 143 (Terminated)
Nov 25 10:01:11 compute-1 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236951]: [WARNING]  (236955) : All workers exited. Exiting... (0)
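Annotation: haproxy's "exited with code 143" is the usual 128 + signal-number convention, i.e. SIGTERM; expected here, since the per-network metadata proxy is being stopped along with its namespace rather than crashing:

    import signal
    assert 143 - 128 == signal.SIGTERM == 15
    print(signal.Signals(143 - 128).name)    # SIGTERM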
Nov 25 10:01:11 compute-1 systemd[1]: libpod-304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d.scope: Deactivated successfully.
Nov 25 10:01:11 compute-1 podman[237164]: 2025-11-25 10:01:11.991345563 +0000 UTC m=+0.035298731 container died 304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 10:01:12 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d-userdata-shm.mount: Deactivated successfully.
Nov 25 10:01:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-6dae0c866ac2c06f6ca318e66e7f7a6657569f89ef9fbe15160807790e79ead4-merged.mount: Deactivated successfully.
Nov 25 10:01:12 compute-1 podman[237164]: 2025-11-25 10:01:12.014126375 +0000 UTC m=+0.058079544 container cleanup 304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:01:12 compute-1 systemd[1]: libpod-conmon-304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d.scope: Deactivated successfully.
Nov 25 10:01:12 compute-1 podman[237189]: 2025-11-25 10:01:12.059877498 +0000 UTC m=+0.030122603 container remove 304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.064 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[8aee2bfa-2f3c-4b30-893a-7a4a7763a049]: (4, ('Tue Nov 25 10:01:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc (304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d)\n304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d\nTue Nov 25 10:01:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc (304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d)\n304dc111a7ad42ad55059da5fa788fc2fc3e2e56e869ad155f4f57107993234d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.066 228687 INFO nova.virt.libvirt.driver [-] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Instance destroyed successfully.
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.067 228687 DEBUG nova.objects.instance [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid 60f54767-63c6-411b-9e17-ab15032acf8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.066 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5bed2d-790b-4b9b-80fc-0a65d9529673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.068 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ce4f6a0-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:01:12 compute-1 kernel: tap3ce4f6a0-b0: left promiscuous mode
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.070 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.077 228687 DEBUG nova.virt.libvirt.vif [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T10:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-837733532',display_name='tempest-TestNetworkBasicOps-server-837733532',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-837733532',id=11,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNiukP4wxZiApB9x/2kRtw/PJfNYSOLoES4vph0I/movDdqr9yG1icdy3J2mdNql0MNqHkYUhtUAIh5xPJgMgrHhQZvTHh+6yC7gk4N+GIh9otUDHxqbeYdj0RE6MUSzBw==',key_name='tempest-TestNetworkBasicOps-1440092039',keypairs=<?>,launch_index=0,launched_at=2025-11-25T10:00:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-jo5ud5jr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T10:00:30Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=60f54767-63c6-411b-9e17-ab15032acf8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.077 228687 DEBUG nova.network.os_vif_util [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "address": "fa:16:3e:5f:8a:bd", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521cf1b3-0c", "ovs_interfaceid": "521cf1b3-0c01-4af0-8577-970d4c4bf811", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.078 228687 DEBUG nova.network.os_vif_util [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:8a:bd,bridge_name='br-int',has_traffic_filtering=True,id=521cf1b3-0c01-4af0-8577-970d4c4bf811,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521cf1b3-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.078 228687 DEBUG os_vif [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:8a:bd,bridge_name='br-int',has_traffic_filtering=True,id=521cf1b3-0c01-4af0-8577-970d4c4bf811,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521cf1b3-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.080 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.080 228687 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap521cf1b3-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.081 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.083 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.086 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.087 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.091 228687 INFO os_vif [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:8a:bd,bridge_name='br-int',has_traffic_filtering=True,id=521cf1b3-0c01-4af0-8577-970d4c4bf811,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521cf1b3-0c')
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.090 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[4b620bc3-a968-4cb4-b261-22fc05a6e88a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.099 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[443e378a-6ea8-4193-85bb-e05e3eca6f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.100 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[e42af222-f677-4d8e-9239-07494f2fa811]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.114 231684 DEBUG oslo.privsep.daemon [-] privsep: reply[f02d75b1-e2c2-4743-b932-f5626a795e58]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 361401, 'reachable_time': 20075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237226, 'error': None, 'target': 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:01:12 compute-1 systemd[1]: run-netns-ovnmeta\x2d3ce4f6a0\x2dbac0\x2d4c27\x2d8d9f\x2dea178a5a08dc.mount: Deactivated successfully.
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.117 143047 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 10:01:12 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:12.119 143047 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bba527-8fa7-4606-9c7c-63917aaced76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
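The remove_netns call logged above runs inside neutron's privsep daemon because unlinking a network namespace requires CAP_SYS_ADMIN, which the unprivileged agent process lacks. A minimal standalone sketch of the same operation with pyroute2 (the library neutron's ip_lib builds on), using the namespace name from the log; the error handling here is an assumption, not neutron's exact code:

    from pyroute2 import netns

    NS_NAME = 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc'

    try:
        netns.remove(NS_NAME)  # unlinks /run/netns/<name>; needs CAP_SYS_ADMIN
    except FileNotFoundError:
        pass  # already gone -- treated the same as a successful delete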
Nov 25 10:01:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.246 228687 INFO nova.virt.libvirt.driver [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Deleting instance files /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f_del
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.246 228687 INFO nova.virt.libvirt.driver [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Deletion of /var/lib/nova/instances/60f54767-63c6-411b-9e17-ab15032acf8f_del complete
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.287 228687 INFO nova.compute.manager [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Took 0.45 seconds to destroy the instance on the hypervisor.
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.287 228687 DEBUG oslo.service.loopingcall [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.288 228687 DEBUG nova.compute.manager [-] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 10:01:12 compute-1 nova_compute[228683]: 2025-11-25 10:01:12.288 228687 DEBUG nova.network.neutron [-] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
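The "Waiting for function ... _deallocate_network_with_retries" line at 10:01:12.287 is the signature of oslo.service's retry decorator, which re-invokes a function with increasing sleeps until it succeeds or the retry budget runs out. A skeletal sketch of that pattern; the retry parameters and exception list below are illustrative, not nova's actual values:

    from oslo_service import loopingcall

    @loopingcall.RetryDecorator(max_retry_count=3, inc_sleep_time=2,
                                max_sleep_time=30, exceptions=(Exception,))
    def _deallocate_network_with_retries():
        # Ask neutron to unbind and delete the instance's ports; any
        # listed exception triggers a sleep-and-retry instead of a failure.
        ...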
Nov 25 10:01:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:13.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:13 compute-1 nova_compute[228683]: 2025-11-25 10:01:13.324 228687 DEBUG nova.compute.manager [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:13 compute-1 nova_compute[228683]: 2025-11-25 10:01:13.324 228687 DEBUG nova.compute.manager [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing instance network info cache due to event network-changed-521cf1b3-0c01-4af0-8577-970d4c4bf811. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:01:13 compute-1 nova_compute[228683]: 2025-11-25 10:01:13.324 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:01:13 compute-1 nova_compute[228683]: 2025-11-25 10:01:13.325 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:01:13 compute-1 nova_compute[228683]: 2025-11-25 10:01:13.325 228687 DEBUG nova.network.neutron [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Refreshing network info cache for port 521cf1b3-0c01-4af0-8577-970d4c4bf811 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:01:13 compute-1 ceph-mon[79643]: pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 29 op/s
Nov 25 10:01:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.583 228687 INFO nova.network.neutron [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Port 521cf1b3-0c01-4af0-8577-970d4c4bf811 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.583 228687 DEBUG nova.network.neutron [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.598 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-60f54767-63c6-411b-9e17-ab15032acf8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.598 228687 DEBUG nova.compute.manager [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-unplugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.598 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.599 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.599 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.599 228687 DEBUG nova.compute.manager [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] No waiting events found dispatching network-vif-unplugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.599 228687 DEBUG nova.compute.manager [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-unplugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.599 228687 DEBUG nova.compute.manager [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.599 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.600 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.600 228687 DEBUG oslo_concurrency.lockutils [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.600 228687 DEBUG nova.compute.manager [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] No waiting events found dispatching network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.600 228687 WARNING nova.compute.manager [req-90dfd63c-58ec-4925-895b-b57f94954d5a req-ecef0281-8c4c-4694-ae31-58cc243d57f3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received unexpected event network-vif-plugged-521cf1b3-0c01-4af0-8577-970d4c4bf811 for instance with vm_state active and task_state deleting.
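Each "Acquiring lock / acquired / released" triplet above is emitted by oslo.concurrency's synchronized decorator, which serializes callers on a named lock and logs the wait and hold times. A minimal sketch of the pattern behind the "-events" lock; the function body is assumed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('60f54767-63c6-411b-9e17-ab15032acf8f-events')
    def _pop_event():
        # Hand the network-vif-* event to any waiter registered for it;
        # when nothing is waiting, the caller logs "No waiting events
        # found", as seen above.
        ...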
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.630 228687 DEBUG nova.network.neutron [-] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.641 228687 INFO nova.compute.manager [-] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Took 2.35 seconds to deallocate network for instance.
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.698 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.699 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.742 228687 DEBUG oslo_concurrency.processutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:01:14 compute-1 nova_compute[228683]: 2025-11-25 10:01:14.940 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:15 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:01:15 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2852511518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:15 compute-1 nova_compute[228683]: 2025-11-25 10:01:15.082 228687 DEBUG oslo_concurrency.processutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
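The "Running cmd (subprocess)" / "CMD ... returned: 0" pair is the standard log signature of oslo.concurrency's process helper, which nova uses here to size the RBD-backed disk pool. Reproducing the same invocation is a one-liner; processutils.execute returns (stdout, stderr) and raises ProcessExecutionError on a non-zero exit code:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')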
Nov 25 10:01:15 compute-1 nova_compute[228683]: 2025-11-25 10:01:15.086 228687 DEBUG nova.compute.provider_tree [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:01:15 compute-1 nova_compute[228683]: 2025-11-25 10:01:15.103 228687 DEBUG nova.scheduler.client.report [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:01:15 compute-1 nova_compute[228683]: 2025-11-25 10:01:15.120 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:15 compute-1 nova_compute[228683]: 2025-11-25 10:01:15.141 228687 INFO nova.scheduler.client.report [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance 60f54767-63c6-411b-9e17-ab15032acf8f
Nov 25 10:01:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:15.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:15 compute-1 nova_compute[228683]: 2025-11-25 10:01:15.184 228687 DEBUG oslo_concurrency.lockutils [None req-66bb130e-a43b-4627-9dd8-e96bd9e963c5 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "60f54767-63c6-411b-9e17-ab15032acf8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:15 compute-1 ceph-mon[79643]: pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Nov 25 10:01:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2852511518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:15 compute-1 nova_compute[228683]: 2025-11-25 10:01:15.391 228687 DEBUG nova.compute.manager [req-b3e8fa4f-b78f-43be-9dfb-05a0d21a584e req-dce95c4e-f6f2-4d7f-b6bc-99a58aceffd0 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Received event network-vif-deleted-521cf1b3-0c01-4af0-8577-970d4c4bf811 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:01:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:17 compute-1 nova_compute[228683]: 2025-11-25 10:01:17.082 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:17.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:17 compute-1 ceph-mon[79643]: pgmap v902: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 57 op/s
Nov 25 10:01:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:17.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:18 compute-1 nova_compute[228683]: 2025-11-25 10:01:18.505 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:18 compute-1 nova_compute[228683]: 2025-11-25 10:01:18.590 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:19.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:19 compute-1 ceph-mon[79643]: pgmap v903: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:19.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:19 compute-1 nova_compute[228683]: 2025-11-25 10:01:19.941 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:21 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:21.011 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:01:21 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:21.012 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
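The "Matched UPDATE: SbGlobalUpdateEvent" line comes from ovsdbapp's row-event machinery: the agent registers an event keyed on table and change type, and the IDL invokes its run() hook on every matching update. A skeletal event of the same shape; the class body is a sketch, not neutron's exact implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='SB_Global', conditions=None,
            # exactly as printed in the matched-event log line above.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # React to the nb_cfg bump; the agent then waits ~2 seconds
            # before updating Chassis_Private, per the next log line.
            ...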
Nov 25 10:01:21 compute-1 nova_compute[228683]: 2025-11-25 10:01:21.013 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:21.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:21 compute-1 ceph-mon[79643]: pgmap v904: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:21.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:22 compute-1 nova_compute[228683]: 2025-11-25 10:01:22.082 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:22 compute-1 nova_compute[228683]: 2025-11-25 10:01:22.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:22 compute-1 nova_compute[228683]: 2025-11-25 10:01:22.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:01:23 compute-1 sudo[237260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:01:23 compute-1 sudo[237260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:23 compute-1 sudo[237260]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:23 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:01:23.013 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
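The DbSetCommand printed above maps one-to-one onto ovsdbapp's db_set API. A rough equivalent, assuming api is an ovsdbapp IDL backend already connected to the OVN southbound database (connection setup omitted):

    # 'api' is an assumed, already-connected ovsdbapp OVN SB backend.
    api.db_set(
        'Chassis_Private', 'ad0cdb86-b3c6-44c6-a890-1db2efa57d2b',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),
    ).execute(check_error=True)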
Nov 25 10:01:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:23.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:23 compute-1 ceph-mon[79643]: pgmap v905: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/840095353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1525438121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:23 compute-1 sudo[237286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:01:23 compute-1 sudo[237286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:23 compute-1 sudo[237286]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:23 compute-1 sudo[237311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:01:23 compute-1 sudo[237311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:23.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:23 compute-1 sudo[237311]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:01:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:01:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:01:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:01:24 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:01:24 compute-1 nova_compute[228683]: 2025-11-25 10:01:24.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:24 compute-1 nova_compute[228683]: 2025-11-25 10:01:24.943 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:25.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:25 compute-1 ceph-mon[79643]: pgmap v906: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 25 10:01:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:25.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:26 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2979500548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:26 compute-1 nova_compute[228683]: 2025-11-25 10:01:26.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:26 compute-1 nova_compute[228683]: 2025-11-25 10:01:26.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:26 compute-1 nova_compute[228683]: 2025-11-25 10:01:26.913 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:26 compute-1 nova_compute[228683]: 2025-11-25 10:01:26.913 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:26 compute-1 nova_compute[228683]: 2025-11-25 10:01:26.914 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:26 compute-1 nova_compute[228683]: 2025-11-25 10:01:26.914 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:01:26 compute-1 nova_compute[228683]: 2025-11-25 10:01:26.914 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.065 228687 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064872.064404, 60f54767-63c6-411b-9e17-ab15032acf8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.066 228687 INFO nova.compute.manager [-] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] VM Stopped (Lifecycle Event)
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.081 228687 DEBUG nova.compute.manager [None req-ebf3176d-c8e6-4d42-9e2e-2f4b56b5ae73 - - - - - -] [instance: 60f54767-63c6-411b-9e17-ab15032acf8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.083 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:27.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:01:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2689774546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.254 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:01:27 compute-1 sudo[237389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:01:27 compute-1 sudo[237389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:27 compute-1 sudo[237389]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:27 compute-1 ceph-mon[79643]: pgmap v907: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 25 10:01:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1338305608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:27 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2689774546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.456 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.456 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4928MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.457 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.457 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.499 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.499 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.514 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:01:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:01:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:27.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.850 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.854 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.869 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
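The inventory dict logged above is what placement turns into schedulable capacity: (total - reserved) * allocation_ratio per resource class. Worked out for the values shown:

    inv = {
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 16.0, MEMORY_MB 7169.0, DISK_GB 52.2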
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.884 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:01:27 compute-1 nova_compute[228683]: 2025-11-25 10:01:27.885 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/589932064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:28 compute-1 podman[237437]: 2025-11-25 10:01:28.789886706 +0000 UTC m=+0.039732459 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:01:28 compute-1 nova_compute[228683]: 2025-11-25 10:01:28.885 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:28 compute-1 nova_compute[228683]: 2025-11-25 10:01:28.885 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:01:28 compute-1 nova_compute[228683]: 2025-11-25 10:01:28.886 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:01:28 compute-1 nova_compute[228683]: 2025-11-25 10:01:28.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:01:28 compute-1 nova_compute[228683]: 2025-11-25 10:01:28.895 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:28 compute-1 nova_compute[228683]: 2025-11-25 10:01:28.895 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:29.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:29 compute-1 ceph-mon[79643]: pgmap v908: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Nov 25 10:01:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:29.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:29 compute-1 nova_compute[228683]: 2025-11-25 10:01:29.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:29 compute-1 nova_compute[228683]: 2025-11-25 10:01:29.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:29 compute-1 nova_compute[228683]: 2025-11-25 10:01:29.946 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:30 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2805373666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:31.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:31 compute-1 ceph-mon[79643]: pgmap v909: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Nov 25 10:01:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:31.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:32 compute-1 nova_compute[228683]: 2025-11-25 10:01:32.084 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:33.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:33 compute-1 ceph-mon[79643]: pgmap v910: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 10:01:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:33.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:34 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/295942382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:01:34 compute-1 nova_compute[228683]: 2025-11-25 10:01:34.947 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:35 compute-1 ceph-mon[79643]: pgmap v911: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 10:01:35 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1407742521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:01:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:35.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:37 compute-1 nova_compute[228683]: 2025-11-25 10:01:37.085 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:37 compute-1 ceph-mon[79643]: pgmap v912: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 10:01:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:37.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:38 compute-1 podman[237458]: 2025-11-25 10:01:38.81070901 +0000 UTC m=+0.058559821 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
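[annotation] The podman line above is a health_status event for ovn_controller: the healthcheck declared in its config_data ('test': '/openstack/healthcheck') ran and reported healthy with a failing streak of 0. A hedged sketch for running the same check on demand; container names are the ones appearing in this log:

    # "podman healthcheck run NAME" executes the container's configured
    # healthcheck; exit code 0 means healthy, non-zero means unhealthy.
    import subprocess

    def is_healthy(name: str) -> bool:
        res = subprocess.run(["podman", "healthcheck", "run", name])
        return res.returncode == 0

    for name in ("ovn_controller", "multipathd", "ovn_metadata_agent"):
        print(name, "healthy" if is_healthy(name) else "unhealthy")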
Nov 25 10:01:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:39 compute-1 ceph-mon[79643]: pgmap v913: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 10:01:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:39.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:39 compute-1 nova_compute[228683]: 2025-11-25 10:01:39.948 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:41.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:41 compute-1 ceph-mon[79643]: pgmap v914: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 10:01:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:42 compute-1 nova_compute[228683]: 2025-11-25 10:01:42.086 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:42 compute-1 podman[237484]: 2025-11-25 10:01:42.786985291 +0000 UTC m=+0.041072164 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 10:01:43 compute-1 sudo[237501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:01:43 compute-1 sudo[237501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:43 compute-1 sudo[237501]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:43 compute-1 ceph-mon[79643]: pgmap v915: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:01:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:43.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:44 compute-1 nova_compute[228683]: 2025-11-25 10:01:44.949 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:45.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:45 compute-1 ceph-mon[79643]: pgmap v916: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:45.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:47 compute-1 nova_compute[228683]: 2025-11-25 10:01:47.087 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:47.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:47 compute-1 ceph-mon[79643]: pgmap v917: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:47.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:49 compute-1 ceph-mon[79643]: pgmap v918: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:49.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:49 compute-1 nova_compute[228683]: 2025-11-25 10:01:49.951 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:51.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:51 compute-1 ceph-mon[79643]: pgmap v919: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Nov 25 10:01:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:51.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:52 compute-1 nova_compute[228683]: 2025-11-25 10:01:52.090 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:52 compute-1 ovn_controller[133620]: 2025-11-25T10:01:52Z|00068|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 25 10:01:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:53 compute-1 ceph-mon[79643]: pgmap v920: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 25 10:01:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:53.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:01:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/816471803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:01:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:01:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/816471803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:01:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/816471803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:01:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/816471803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:01:54 compute-1 nova_compute[228683]: 2025-11-25 10:01:54.953 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:55.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:55 compute-1 ceph-mon[79643]: pgmap v921: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:57 compute-1 nova_compute[228683]: 2025-11-25 10:01:57.093 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:57.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:57 compute-1 ceph-mon[79643]: pgmap v922: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:57.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:59.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:59 compute-1 ceph-mon[79643]: pgmap v923: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:01:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:59.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:59 compute-1 podman[237536]: 2025-11-25 10:01:59.779887486 +0000 UTC m=+0.035428846 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 10:01:59 compute-1 nova_compute[228683]: 2025-11-25 10:01:59.954 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:01.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:01 compute-1 ceph-mon[79643]: pgmap v924: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:02:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1795209543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:01.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:02 compute-1 nova_compute[228683]: 2025-11-25 10:02:02.096 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:03 compute-1 sudo[237554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:02:03 compute-1 sudo[237554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:03 compute-1 sudo[237554]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:03.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:03 compute-1 ceph-mon[79643]: pgmap v925: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 10:02:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:04 compute-1 nova_compute[228683]: 2025-11-25 10:02:04.956 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:02:05.005 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:02:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:02:05.006 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:02:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:02:05.006 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:02:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:05.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:05 compute-1 ceph-mon[79643]: pgmap v926: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Nov 25 10:02:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:05.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:07 compute-1 nova_compute[228683]: 2025-11-25 10:02:07.099 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:07.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:07 compute-1 ceph-mon[79643]: pgmap v927: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Nov 25 10:02:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:07.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:09.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:09 compute-1 ceph-mon[79643]: pgmap v928: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 29 op/s
Nov 25 10:02:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:09 compute-1 podman[237583]: 2025-11-25 10:02:09.794698649 +0000 UTC m=+0.050124168 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:02:09 compute-1 nova_compute[228683]: 2025-11-25 10:02:09.958 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.623732) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930623750, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2352, "num_deletes": 251, "total_data_size": 6059147, "memory_usage": 6159992, "flush_reason": "Manual Compaction"}
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930632481, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3937847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26128, "largest_seqno": 28475, "table_properties": {"data_size": 3928612, "index_size": 5729, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19664, "raw_average_key_size": 20, "raw_value_size": 3909867, "raw_average_value_size": 4030, "num_data_blocks": 252, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064722, "oldest_key_time": 1764064722, "file_creation_time": 1764064930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 8772 microseconds, and 5505 cpu microseconds.
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.632504) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3937847 bytes OK
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.632514) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.632798) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.632807) EVENT_LOG_v1 {"time_micros": 1764064930632805, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.632816) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6048696, prev total WAL file size 6048696, number of live WAL files 2.
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.633614) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3845KB)], [51(11MB)]
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930633639, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16019517, "oldest_snapshot_seqno": -1}
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5854 keys, 13906297 bytes, temperature: kUnknown
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930661404, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 13906297, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13867015, "index_size": 23556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148629, "raw_average_key_size": 25, "raw_value_size": 13761306, "raw_average_value_size": 2350, "num_data_blocks": 959, "num_entries": 5854, "num_filter_entries": 5854, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764064930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.661569) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 13906297 bytes
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.661926) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 575.6 rd, 499.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.5 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 6371, records dropped: 517 output_compression: NoCompression
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.661940) EVENT_LOG_v1 {"time_micros": 1764064930661934, "job": 30, "event": "compaction_finished", "compaction_time_micros": 27830, "compaction_time_cpu_micros": 19697, "output_level": 6, "num_output_files": 1, "total_output_size": 13906297, "num_input_records": 6371, "num_output_records": 5854, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930662527, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930663963, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.633564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.664023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.664027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.664028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.664029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:02:10.664030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
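[annotation] The rocksdb burst above — a memtable flush followed by a manual L0-to-L6 compaction inside the ceph-mon store — embeds machine-readable EVENT_LOG_v1 JSON payloads (flush_started, table_file_creation, compaction_finished) alongside the human-readable lines. A hedged sketch for extracting those payloads from captured journal lines, assuming exactly the line shapes shown here:

    import json

    def rocksdb_events(lines):
        # Yields the JSON dicts rocksdb appends after "EVENT_LOG_v1 ",
        # including lines wrapped in "(Original Log Time ...)".
        marker = "EVENT_LOG_v1 "
        for line in lines:
            idx = line.find(marker)
            if idx != -1:
                yield json.loads(line[idx + len(marker):])

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930661934, '
              '"job": 30, "event": "compaction_finished", '
              '"compaction_time_micros": 27830}')
    for ev in rocksdb_events([sample]):
        print(ev["event"], ev["compaction_time_micros"], "us")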
Nov 25 10:02:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:11.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:11 compute-1 ceph-mon[79643]: pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 25 10:02:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:12 compute-1 nova_compute[228683]: 2025-11-25 10:02:12.100 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:13.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:13 compute-1 ceph-mon[79643]: pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 25 10:02:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:13 compute-1 podman[237609]: 2025-11-25 10:02:13.777793803 +0000 UTC m=+0.033398258 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:02:14 compute-1 nova_compute[228683]: 2025-11-25 10:02:14.957 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:15.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:15 compute-1 ceph-mon[79643]: pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:17 compute-1 nova_compute[228683]: 2025-11-25 10:02:17.103 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:17.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:17 compute-1 ceph-mon[79643]: pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:19.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:19.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:19 compute-1 ceph-mon[79643]: pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:19 compute-1 nova_compute[228683]: 2025-11-25 10:02:19.959 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:21.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:21 compute-1 ceph-mon[79643]: pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:22 compute-1 nova_compute[228683]: 2025-11-25 10:02:22.106 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:23 compute-1 sudo[237630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:02:23 compute-1 sudo[237630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:23 compute-1 sudo[237630]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:23.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:23.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:23 compute-1 ceph-mon[79643]: pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3580303334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/364596767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:24 compute-1 nova_compute[228683]: 2025-11-25 10:02:24.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:24 compute-1 nova_compute[228683]: 2025-11-25 10:02:24.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:02:24 compute-1 nova_compute[228683]: 2025-11-25 10:02:24.962 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:25.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:25.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:25 compute-1 ceph-mon[79643]: pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:26 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:02:26 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741411573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:26 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3681619112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:26 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1741411573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:26 compute-1 nova_compute[228683]: 2025-11-25 10:02:26.890 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:26 compute-1 nova_compute[228683]: 2025-11-25 10:02:26.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:26 compute-1 nova_compute[228683]: 2025-11-25 10:02:26.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:26 compute-1 nova_compute[228683]: 2025-11-25 10:02:26.912 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:02:26 compute-1 nova_compute[228683]: 2025-11-25 10:02:26.913 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:02:26 compute-1 nova_compute[228683]: 2025-11-25 10:02:26.913 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:02:26 compute-1 nova_compute[228683]: 2025-11-25 10:02:26.913 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:02:26 compute-1 nova_compute[228683]: 2025-11-25 10:02:26.913 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.108 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:02:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/546263598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.246 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:02:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:27.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:27 compute-1 sudo[237680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.455 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:02:27 compute-1 sudo[237680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.457 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4928MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.457 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.458 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:02:27 compute-1 sudo[237680]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:27 compute-1 sudo[237705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:02:27 compute-1 sudo[237705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.503 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.503 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.526 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:02:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:27.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:27 compute-1 ceph-mon[79643]: pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/546263598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:02:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1877182358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.879 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.884 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:02:27 compute-1 sudo[237705]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.895 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.897 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:02:27 compute-1 nova_compute[228683]: 2025-11-25 10:02:27.897 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:02:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1877182358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:02:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:02:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:02:28 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:02:28 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 3198 syncs, 3.41 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3881 writes, 13K keys, 3881 commit groups, 1.0 writes per commit group, ingest: 16.69 MB, 0.03 MB/s
                                           Interval WAL: 3881 writes, 1691 syncs, 2.30 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 10:02:28 compute-1 nova_compute[228683]: 2025-11-25 10:02:28.897 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:28 compute-1 nova_compute[228683]: 2025-11-25 10:02:28.897 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:02:28 compute-1 nova_compute[228683]: 2025-11-25 10:02:28.898 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:02:28 compute-1 nova_compute[228683]: 2025-11-25 10:02:28.916 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:02:28 compute-1 nova_compute[228683]: 2025-11-25 10:02:28.916 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:29 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 10:02:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:29.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:29 compute-1 ceph-mon[79643]: pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 25 10:02:29 compute-1 nova_compute[228683]: 2025-11-25 10:02:29.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:29 compute-1 nova_compute[228683]: 2025-11-25 10:02:29.964 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:30 compute-1 podman[237783]: 2025-11-25 10:02:30.780910414 +0000 UTC m=+0.034913045 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 10:02:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:31.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:31 compute-1 ceph-mon[79643]: pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:31 compute-1 nova_compute[228683]: 2025-11-25 10:02:31.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:31 compute-1 nova_compute[228683]: 2025-11-25 10:02:31.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:32 compute-1 sudo[237800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:02:32 compute-1 sudo[237800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:32 compute-1 sudo[237800]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:32 compute-1 nova_compute[228683]: 2025-11-25 10:02:32.111 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:32 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:33.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:33.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:33 compute-1 ceph-mon[79643]: pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1 KiB/s rd, 1 op/s
Nov 25 10:02:34 compute-1 nova_compute[228683]: 2025-11-25 10:02:34.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:34 compute-1 nova_compute[228683]: 2025-11-25 10:02:34.964 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:35 compute-1 ceph-mon[79643]: pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:35.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:35.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:37 compute-1 ceph-mon[79643]: pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:37 compute-1 nova_compute[228683]: 2025-11-25 10:02:37.114 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:37.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:39 compute-1 ceph-mon[79643]: pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1 KiB/s rd, 1 op/s
Nov 25 10:02:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:39.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:39 compute-1 nova_compute[228683]: 2025-11-25 10:02:39.965 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:40 compute-1 podman[237829]: 2025-11-25 10:02:40.796235927 +0000 UTC m=+0.051691833 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller)
Nov 25 10:02:41 compute-1 ceph-mon[79643]: pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:41.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:42 compute-1 nova_compute[228683]: 2025-11-25 10:02:42.116 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:43 compute-1 ceph-mon[79643]: pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:43 compute-1 sudo[237853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:02:43 compute-1 sudo[237853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:43 compute-1 sudo[237853]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:43.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:44 compute-1 podman[237879]: 2025-11-25 10:02:44.784907745 +0000 UTC m=+0.039342624 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 10:02:44 compute-1 nova_compute[228683]: 2025-11-25 10:02:44.967 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:45 compute-1 ceph-mon[79643]: pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:45 compute-1 sshd-session[237897]: Accepted publickey for zuul from 192.168.122.10 port 56116 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 10:02:45 compute-1 systemd-logind[746]: New session 53 of user zuul.
Nov 25 10:02:45 compute-1 systemd[1]: Started Session 53 of User zuul.
Nov 25 10:02:45 compute-1 sshd-session[237897]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:02:45 compute-1 sudo[237901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 10:02:45 compute-1 sudo[237901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:02:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:45.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:47 compute-1 ceph-mon[79643]: pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:47 compute-1 nova_compute[228683]: 2025-11-25 10:02:47.119 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:47.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:47.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:48 compute-1 ceph-mon[79643]: from='client.26600 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-1 ceph-mon[79643]: from='client.26569 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-1 ceph-mon[79643]: from='client.16992 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-1 ceph-mon[79643]: from='client.26609 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 10:02:48 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2037614428' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:02:49 compute-1 ceph-mon[79643]: from='client.16998 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:49 compute-1 ceph-mon[79643]: pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:49 compute-1 ceph-mon[79643]: from='client.17004 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:49 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2037614428' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:02:49 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4229107572' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:02:49 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1586734496' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:02:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:49.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:49.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:49 compute-1 nova_compute[228683]: 2025-11-25 10:02:49.968 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:51 compute-1 ceph-mon[79643]: pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:51.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:52 compute-1 nova_compute[228683]: 2025-11-25 10:02:52.123 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:53 compute-1 ceph-mon[79643]: pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:53.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:53.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:54 compute-1 nova_compute[228683]: 2025-11-25 10:02:54.969 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:54 compute-1 ovs-vsctl[238247]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 10:02:55 compute-1 ceph-mon[79643]: pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:55 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3803642284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:02:55 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/3803642284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:02:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:55.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:55 compute-1 virtqemud[228099]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 10:02:55 compute-1 virtqemud[228099]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 10:02:55 compute-1 virtqemud[228099]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 10:02:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: cache status {prefix=cache status} (starting...)
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:56 compute-1 lvm[238564]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:02:56 compute-1 lvm[238564]: VG ceph_vg0 finished
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: client ls {prefix=client ls} (starting...)
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:56 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:02:56 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/268889035' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 10:02:56 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:02:57 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5474 writes, 28K keys, 5474 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
                                           Cumulative WAL: 5474 writes, 5474 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1492 writes, 7261 keys, 1492 commit groups, 1.0 writes per commit group, ingest: 16.63 MB, 0.03 MB/s
                                           Interval WAL: 1492 writes, 1492 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    403.3      0.11              0.07        15    0.007       0      0       0.0       0.0
                                             L6      1/0   13.26 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    492.3    420.7      0.41              0.27        14    0.030     72K   7350       0.0       0.0
                                            Sum      1/0   13.26 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    391.7    417.2      0.52              0.33        29    0.018     72K   7350       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    404.0    411.6      0.18              0.12        10    0.018     30K   2534       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    492.3    420.7      0.41              0.27        14    0.030     72K   7350       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    406.5      0.11              0.07        14    0.008       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.042, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.21 GB write, 0.12 MB/s write, 0.20 GB read, 0.11 MB/s read, 0.5 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633d9fc7350#2 capacity: 304.00 MB usage: 18.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(938,17.72 MB,5.8286%) FilterBlock(29,214.36 KB,0.0688603%) IndexBlock(29,373.06 KB,0.119842%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-1 nova_compute[228683]: 2025-11-25 10:02:57.124 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:57 compute-1 ceph-mon[79643]: pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:57 compute-1 ceph-mon[79643]: from='client.26648 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mon[79643]: from='client.26623 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/268889035' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mon[79643]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2345564020' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mon[79643]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mon[79643]: from='client.26663 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:02:57 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2019448717' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:57.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 25 10:02:57 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3117861905' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 25 10:02:57 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3077889400' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:57.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: ops {prefix=ops} (starting...)
Nov 25 10:02:57 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 25 10:02:57 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1237955398' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 25 10:02:58 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2913515115' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.26644 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2036901220' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2019448717' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.26690 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.26684 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.26680 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2452178801' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3117861905' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3077889400' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.17088 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.26720 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.26729 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/958389917' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1589163821' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1237955398' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.26722 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/408653654' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3224074830' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2913515115' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 10:02:58 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3661723052' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:02:58 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: session ls {prefix=session ls} (starting...)
Nov 25 10:02:58 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:02:58 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: status {prefix=status} (starting...)
Nov 25 10:02:58 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 10:02:58 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1910110655' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 10:02:59 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2868420368' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.26743 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.26777 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.17130 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/932111885' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3661723052' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1421500948' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.26770 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.26801 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2277747532' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/763932686' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1910110655' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.26819 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2480608717' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3290073188' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/117177134' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1076590568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2868420368' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 25 10:02:59 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3610283719' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:02:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:59 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 10:02:59 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2256568961' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:02:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:02:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:59.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:59 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 25 10:02:59 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4142727159' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:02:59 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 25 10:02:59 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2582059050' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:02:59 compute-1 nova_compute[228683]: 2025-11-25 10:02:59.970 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:00 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 10:03:00 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3041939863' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.26852 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/288058564' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3610283719' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2737613877' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2077170816' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2256568961' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.26885 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1977904971' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.26851 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2156290557' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/356955100' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4142727159' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2821648642' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1558713057' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2582059050' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1945648626' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3469210907' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3041939863' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 25 10:03:00 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/736313694' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:03:00 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 10:03:00 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/430701055' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.17253 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1131750807' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/736313694' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.26917 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3965772973' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.26923 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2663197137' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/648257444' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/430701055' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.26941 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.26950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/816679850' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/216371832' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4172689874' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2005351374' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 85 pg[6.b( v 42'42 (0'0,42'42] lb MIN local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=-1 lpr=84 pi=[61,84)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started 0.841990 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:04.912116+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:34.020558+0000 osd.0 (osd.0) 94 : cluster [DBG] 10.1c scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:34.034464+0000 osd.0 (osd.0) 95 : cluster [DBG] 10.1c scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 1220608 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 95)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:34.020558+0000 osd.0 (osd.0) 94 : cluster [DBG] 10.1c scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:34.034464+0000 osd.0 (osd.0) 95 : cluster [DBG] 10.1c scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 86 handle_osd_map epochs [85,86], i have 86, src has [1,86]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005531 2 0.000054
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006458 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005844 2 0.000070
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006678 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=6 ec=51/29 lis/c=85/51 les/c/f=86/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001435 3 0.000091
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=6 ec=51/29 lis/c=85/51 les/c/f=86/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=6 ec=51/29 lis/c=85/51 les/c/f=86/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=6 ec=51/29 lis/c=85/51 les/c/f=86/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/51 les/c/f=86/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001706 3 0.000056
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/51 les/c/f=86/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/51 les/c/f=86/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/51 les/c/f=86/52/0 sis=85) [0] r=0 lpr=85 pi=[51,85)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:05.912231+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:35.038396+0000 osd.0 (osd.0) 96 : cluster [DBG] 10.1d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:35.048974+0000 osd.0 (osd.0) 97 : cluster [DBG] 10.1d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 86 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1212416 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 97)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:35.038396+0000 osd.0 (osd.0) 96 : cluster [DBG] 10.1d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:35.048974+0000 osd.0 (osd.0) 97 : cluster [DBG] 10.1d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:06.912349+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:36.007787+0000 osd.0 (osd.0) 98 : cluster [DBG] 12.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:36.018377+0000 osd.0 (osd.0) 99 : cluster [DBG] 12.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.220875740s of 10.286118507s, submitted: 67
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 86 heartbeat osd_stat(store_statfs(0x4fcaaa000/0x0/0x4ffc00000, data 0x101cba/0x16f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1392640 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=0 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=0 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000037
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000110 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000222 1 0.000206
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=0 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000132 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=0 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000032
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000125 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000066 1 0.000214
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000821 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 99)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:36.007787+0000 osd.0 (osd.0) 98 : cluster [DBG] 12.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:36.018377+0000 osd.0 (osd.0) 99 : cluster [DBG] 12.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000181 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:07.912435+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:37.042211+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:37.052635+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 1376256 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 101)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:37.042211+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:37.052635+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.013089 2 0.000532
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.013710 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.013896 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000306 1 0.000471
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000088 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.013805 2 0.000996
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.015020 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.015167 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=0 lpr=87 pi=[70,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000030 1 0.000049
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:08.912529+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:38.059117+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:38.069709+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1368064 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 666918 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.004973 6 0.000027
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 103)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:38.059117+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:38.069709+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.005258 6 0.000486
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 lc 35'540 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001963 3 0.000086
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 lc 35'540 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 lc 35'540 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000108 1 0.000039
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 lc 35'540 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:09.912619+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:39.053987+0000 osd.0 (osd.0) 104 : cluster [DBG] 2.17 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:39.064596+0000 osd.0 (osd.0) 105 : cluster [DBG] 2.17 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035662 1 0.000080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 lc 35'368 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.037570 3 0.000199
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 lc 35'368 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 lc 35'368 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000035 1 0.000045
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 lc 35'368 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059877 1 0.000020
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1d57400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1171456 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 105)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:39.053987+0000 osd.0 (osd.0) 104 : cluster [DBG] 2.17 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:39.064596+0000 osd.0 (osd.0) 105 : cluster [DBG] 2.17 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.912722 1 0.000023
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.010354 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.015844 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000342 1 0.000481
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000106 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.973636 1 0.000028
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.011446 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.016445 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000052 1 0.000080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001645 2 0.000037
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002052 2 0.000437
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0olog.dups.size()=36
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=36
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000417 2 0.000077
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0olog.dups.size()=51
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=51
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000388 2 0.000023
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:10.912718+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:40.048290+0000 osd.0 (osd.0) 106 : cluster [DBG] 10.1f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:40.062313+0000 osd.0 (osd.0) 107 : cluster [DBG] 10.1f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=0 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=0 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000029
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000052
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000542 2 0.000063
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 90 pg[6.e( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 90 handle_osd_map epochs [87,90], i have 90, src has [1,90]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 1105920 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 90 handle_osd_map epochs [90,91], i have 91, src has [1,91]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003052 2 0.000058
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005169 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002955 2 0.000148
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005457 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 107)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:40.048290+0000 osd.0 (osd.0) 106 : cluster [DBG] 10.1f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:40.062313+0000 osd.0 (osd.0) 107 : cluster [DBG] 10.1f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.483330 2 0.000059
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.484028 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/70 les/c/f=91/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001328 4 0.000095
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/70 les/c/f=91/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/70 les/c/f=91/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/70 les/c/f=91/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=6 ec=51/29 lis/c=90/70 les/c/f=91/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001372 4 0.000158
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=6 ec=51/29 lis/c=90/70 les/c/f=91/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=6 ec=51/29 lis/c=90/70 les/c/f=91/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000031 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=6 ec=51/29 lis/c=90/70 les/c/f=91/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.001566 3 0.000126
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000091 1 0.000054
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000026 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:11.912813+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:41.080642+0000 osd.0 (osd.0) 108 : cluster [DBG] 12.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:41.091078+0000 osd.0 (osd.0) 109 : cluster [DBG] 12.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007709 3 0.000075
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=89/91 n=1 ec=49/17 lis/c=89/69 les/c/f=91/70/0 sis=89) [0] r=0 lpr=90 pi=[69,89)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fca9c000/0x0/0x4ffc00000, data 0x10a31e/0x17e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1081344 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 109)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:41.080642+0000 osd.0 (osd.0) 108 : cluster [DBG] 12.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:41.091078+0000 osd.0 (osd.0) 109 : cluster [DBG] 12.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:12.912948+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:42.052385+0000 osd.0 (osd.0) 110 : cluster [DBG] 5.18 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:42.062843+0000 osd.0 (osd.0) 111 : cluster [DBG] 5.18 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1064960 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:13.913095+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 4 last_log 113 sent 111 num 4 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:43.102657+0000 osd.0 (osd.0) 112 : cluster [DBG] 4.18 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:43.112770+0000 osd.0 (osd.0) 113 : cluster [DBG] 4.18 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 111)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:42.052385+0000 osd.0 (osd.0) 110 : cluster [DBG] 5.18 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:42.062843+0000 osd.0 (osd.0) 111 : cluster [DBG] 5.18 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1056768 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 704295 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:14.913262+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 4 last_log 115 sent 113 num 4 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:44.127506+0000 osd.0 (osd.0) 114 : cluster [DBG] 8.17 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:44.138089+0000 osd.0 (osd.0) 115 : cluster [DBG] 8.17 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 113)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:43.102657+0000 osd.0 (osd.0) 112 : cluster [DBG] 4.18 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:43.112770+0000 osd.0 (osd.0) 113 : cluster [DBG] 4.18 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 115)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:44.127506+0000 osd.0 (osd.0) 114 : cluster [DBG] 8.17 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:44.138089+0000 osd.0 (osd.0) 115 : cluster [DBG] 8.17 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1007616 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:15.913460+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:45.161082+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:45.171680+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 117)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:45.161082+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:45.171680+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fca9b000/0x0/0x4ffc00000, data 0x10c2c0/0x181000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 999424 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:16.913612+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:46.141583+0000 osd.0 (osd.0) 118 : cluster [DBG] 5.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:46.152070+0000 osd.0 (osd.0) 119 : cluster [DBG] 5.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 119)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:46.141583+0000 osd.0 (osd.0) 118 : cluster [DBG] 5.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:46.152070+0000 osd.0 (osd.0) 119 : cluster [DBG] 5.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.080902100s of 10.138784409s, submitted: 66
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 991232 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fca9b000/0x0/0x4ffc00000, data 0x10c2c0/0x181000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:17.913761+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:47.181032+0000 osd.0 (osd.0) 120 : cluster [DBG] 11.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:47.191444+0000 osd.0 (osd.0) 121 : cluster [DBG] 11.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=0 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=0 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000038
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000065 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000164 1 0.000179
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000242 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=0 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000151 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=0 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000033
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000060 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000068 1 0.000147
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000244 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000403 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 121)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:47.181032+0000 osd.0 (osd.0) 120 : cluster [DBG] 11.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:47.191444+0000 osd.0 (osd.0) 121 : cluster [DBG] 11.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=42'42 mlcod 42'42 active+clean] exit Started/Primary/Active/Clean 41.152705 89 0.000319
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary/Active 41.576682 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary 42.586264 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started 42.586371 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=42'42 mlcod 42'42 active mbc={255={}}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425540924s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 42'42 active pruub 273.496917725s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425458908s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 273.496917725s@ mbc={}] exit Reset 0.000110 1 0.000438
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425458908s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 273.496917725s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425458908s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 273.496917725s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425458908s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 273.496917725s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425458908s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 273.496917725s@ mbc={}] exit Start 0.000045 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 92 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=14.425458908s) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 273.496917725s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:18.913899+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:48.152453+0000 osd.0 (osd.0) 122 : cluster [DBG] 5.1c deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:48.162960+0000 osd.0 (osd.0) 123 : cluster [DBG] 5.1c deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.998837 2 0.000356
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.999305 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.999413 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000476 1 0.000545
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000015 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.000106 2 0.000085
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.000557 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.000677 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000051 1 0.000256
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
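In the start_peering_interval lines, "up" is the CRUSH-computed set and "acting" the set actually serving I/O; the exit from Started/Primary/WaitActingChange just above is the primary receiving the pg_temp mapping it had requested, so acting flips to [2] while osd.0 stays up-primary but drops to role -1 and goes Stray locally. A small sketch that extracts the before/after sets from such a line (assuming this exact message layout):

    import re

    # "PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], ..."
    INTERVAL_RE = re.compile(
        r"start_peering_interval up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
        r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]"
    )

    def interval_change(line):
        """Return ((old_up, new_up), (old_acting, new_acting)) or None."""
        m = INTERVAL_RE.search(line)
        if not m:
            return None
        parse = lambda s: [int(x) for x in s.split(",") if x]
        return ((parse(m["up_old"]), parse(m["up_new"])),
                (parse(m["act_old"]), parse(m["act_new"])))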
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 93 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
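In a handle_osd_map line, "epochs [92,93]" is the range carried by the incoming map message, "i have 93" the newest epoch this OSD has already committed, and "src has [1,93]" the sender's full range; only epochs newer than the local one get applied, which is why the two follow-ups carrying [93,93] are no-ops. A sketch of that bookkeeping (function and parameter names are illustrative):

    def epochs_to_apply(msg_first, msg_last, have):
        """Epochs from an incoming map message that are actually new."""
        first_new = max(msg_first, have + 1)
        return range(first_new, msg_last + 1)

    assert list(epochs_to_apply(92, 93, 93)) == []    # already have 93
    assert list(epochs_to_apply(95, 95, 94)) == [95]  # one new epoch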
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.001245 7 0.000148
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 123)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:48.152453+0000 osd.0 (osd.0) 122 : cluster [DBG] 5.1c deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:48.162960+0000 osd.0 (osd.0) 123 : cluster [DBG] 5.1c deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.130476 2 0.000075
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.130512 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000056 1 0.000067
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] lb MIN local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 DELETING pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.023717 2 0.000119
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] lb MIN local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.023801 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] lb MIN local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=-1 lpr=92 pi=[61,92)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started 1.155658 0 0.000000
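pg 6.f above is a stray replica (r=-1, acting [1]) whose local copy is no longer wanted on osd.0, so it is torn down in place: ReplicaActive -> ToDelete -> WaitDeleteReserved (a deletion reservation is taken so removals don't swamp the OSD) -> Deleting, after which the pg leaves Started entirely. A sketch that flags PGs whose local copy an OSD started deleting, under the same line layout as above:

    import re

    DELETING_RE = re.compile(
        r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.* enter Started/ToDelete/Deleting"
    )

    def deleted_pgs(lines):
        """PG ids whose local copy this OSD began removing."""
        return {m["pgid"] for line in lines
                if (m := DELETING_RE.search(line))}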
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 966656 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 714047 data_alloc: 218103808 data_used: 180224
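The tune_memory line compares the process's mapped heap (~70 MiB) against the 4 GiB osd_memory_target; with usage far under target the aggregate cache budget ("old mem"/"new mem") stays pinned at ~2.65 GiB, which _resize_shards then splits across the kv, kv_onode, meta and data caches. The split can be sanity-checked straight from the logged values (a sketch; numbers copied from the line above):

    cache_size = 2845415833            # bytes, from _resize_shards
    shares = {
        "kv":       1207959552,
        "kv_onode":  234881024,
        "meta":     1140850688,
        "data":      218103808,
    }
    for name, alloc in shares.items():
        print(f"{name:8s} {alloc / cache_size:6.1%}")
    # kv ~42%, meta ~40%, kv_onode ~8%, data ~8%; the four allocations
    # sum to ~98% of cache_size, the remainder is unassigned headroom.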
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:19.914019+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:49.150384+0000 osd.0 (osd.0) 124 : cluster [DBG] 11.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:49.160995+0000 osd.0 (osd.0) 125 : cluster [DBG] 11.14 scrub ok
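The log_client counters are self-consistent: last_log is the newest queued cluster-log sequence number and sent the newest already shipped to the monitor, so unsent = last_log - sent (here 125 - 123 = 2), and those two entries, 124 and 125, are exactly the "will send" lines above. In sketch form:

    def unsent_entries(last_log, sent):
        """Cluster-log sequence numbers still waiting to go to the mon."""
        return list(range(sent + 1, last_log + 1))

    assert unsent_entries(125, 123) == [124, 125]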
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.005599 6 0.000031
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.006332 6 0.000084
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 125)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:49.150384+0000 osd.0 (osd.0) 124 : cluster [DBG] 11.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:49.160995+0000 osd.0 (osd.0) 125 : cluster [DBG] 11.14 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 lc 35'562 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002528 3 0.000174
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 lc 35'562 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 lc 35'562 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000036 1 0.000033
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 lc 35'562 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035816 1 0.000072
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 lc 35'129 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.037949 3 0.000157
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 lc 35'129 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 lc 35'129 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000057
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 lc 35'129 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052761 1 0.000028
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fca96000/0x0/0x4ffc00000, data 0x1102ef/0x185000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
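In the heartbeat's store_statfs, the first hex triple appears to be available/internally-reserved/total bytes (this matches how store_statfs_t prints, though that field order is an assumption here), and "data 0x1102ef/0x185000" is logical bytes stored versus bytes allocated. Decoded:

    GiB = 1 << 30
    available, reserved, total = 0x4fca96000, 0x0, 0x4ffc00000
    print(f"total {total / GiB:.1f} GiB, available {available / GiB:.1f} GiB")
    # -> total 20.0 GiB, available 19.9 GiB: a near-empty 20 GiB OSD,
    #    consistent with only ~1.5 MiB (0x185000) of allocated data.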
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=0 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=0 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000017
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000033
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000105 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 876544 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:20.914172+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:50.126421+0000 osd.0 (osd.0) 126 : cluster [DBG] 3.1c scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:50.137110+0000 osd.0 (osd.0) 127 : cluster [DBG] 3.1c scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 94 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.923587 1 0.000057
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.014662 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.021073 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 1 0.000260
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 1 0.000028
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.859029 2 0.000048
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.859303 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.976427 1 0.000123
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.015293 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.020937 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000375
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000029
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.859317 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=0 lpr=94 pi=[51,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000203 1 0.000541
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000166 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=44
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=44
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000929 3 0.000136
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=33
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=33
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000873 3 0.000030
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
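Back as primary for pgs 9.f and 9.1f, osd.0 walks the normal peering ladder: GetInfo (query peers' pg_info), GetLog (fetch and merge an authoritative log; the merge_log_dups lines show 44 and 33 dup entries coming across), GetMissing (build per-peer missing sets, instant here), then WaitUpThru, where the primary parks until the monitor commits an osdmap recording its up_thru; both pgs sit there about a second until epoch 96 lands below. A sketch for spotting pgs stuck in that last phase (the threshold is an arbitrary example):

    import re

    WAIT_RE = re.compile(
        r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*"
        r"exit Started/Primary/Peering/WaitUpThru (?P<dwell>\d+\.\d+)"
    )

    def slow_waitupthru(lines, threshold=5.0):
        """PGs that waited on up_thru longer than `threshold` seconds."""
        for line in lines:
            if (m := WAIT_RE.search(line)) and float(m["dwell"]) > threshold:
                yield m["pgid"], float(m["dwell"])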
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 127)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:50.126421+0000 osd.0 (osd.0) 126 : cluster [DBG] 3.1c scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:50.137110+0000 osd.0 (osd.0) 127 : cluster [DBG] 3.1c scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 909312 heap: 74407936 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:21.914295+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:51.141182+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:51.151777+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.14 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 95 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001205 2 0.000058
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002139 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001362 2 0.000050
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002347 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001048 4 0.000085
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.002901 6 0.000366
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001521 4 0.000058
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 129)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:51.141182+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:51.151777+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.14 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 lc 35'412 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003120 3 0.000047
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 lc 35'412 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 lc 35'412 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000023 1 0.000039
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 lc 35'412 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.014565 1 0.000017
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fca8b000/0x0/0x4ffc00000, data 0x1164ba/0x190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 1933312 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.722299 1 0.000017
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.740062 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 1.743178 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] r=-1 lpr=95 pi=[51,95)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000051 1 0.000083
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000033
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=12
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000977 3 0.000029
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:22.914457+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:52.136853+0000 osd.0 (osd.0) 130 : cluster [DBG] 8.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:52.147445+0000 osd.0 (osd.0) 131 : cluster [DBG] 8.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 131)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:52.136853+0000 osd.0 (osd.0) 130 : cluster [DBG] 8.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:52.147445+0000 osd.0 (osd.0) 131 : cluster [DBG] 8.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 1916928 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000345 2 0.000040
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001392 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=97/98 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 98 handle_osd_map epochs [97,98], i have 98, src has [1,98]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=97/98 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=97/98 n=6 ec=51/29 lis/c=97/51 les/c/f=98/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001895 3 0.000089
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=97/98 n=6 ec=51/29 lis/c=97/51 les/c/f=98/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=97/98 n=6 ec=51/29 lis/c=97/51 les/c/f=98/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=97/98 n=6 ec=51/29 lis/c=97/51 les/c/f=98/52/0 sis=97) [0] r=0 lpr=97 pi=[51,97)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:23.914611+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:53.161073+0000 osd.0 (osd.0) 132 : cluster [DBG] 5.1 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:53.171655+0000 osd.0 (osd.0) 133 : cluster [DBG] 5.1 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 133)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:53.161073+0000 osd.0 (osd.0) 132 : cluster [DBG] 5.1 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:53.171655+0000 osd.0 (osd.0) 133 : cluster [DBG] 5.1 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 1908736 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758754 data_alloc: 218103808 data_used: 180224
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:24.914742+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:54.167893+0000 osd.0 (osd.0) 134 : cluster [DBG] 11.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:54.178501+0000 osd.0 (osd.0) 135 : cluster [DBG] 11.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 135)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:54.167893+0000 osd.0 (osd.0) 134 : cluster [DBG] 11.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:54.178501+0000 osd.0 (osd.0) 135 : cluster [DBG] 11.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 1851392 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fca83000/0x0/0x4ffc00000, data 0x11a413/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:25.914859+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:55.171658+0000 osd.0 (osd.0) 136 : cluster [DBG] 8.8 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:55.182049+0000 osd.0 (osd.0) 137 : cluster [DBG] 8.8 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 137)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:55.171658+0000 osd.0 (osd.0) 136 : cluster [DBG] 8.8 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:55.182049+0000 osd.0 (osd.0) 137 : cluster [DBG] 8.8 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 1843200 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:26.914987+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:56.128813+0000 osd.0 (osd.0) 138 : cluster [DBG] 3.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:56.139544+0000 osd.0 (osd.0) 139 : cluster [DBG] 3.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0x11a413/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 139)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:56.128813+0000 osd.0 (osd.0) 138 : cluster [DBG] 3.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:56.139544+0000 osd.0 (osd.0) 139 : cluster [DBG] 3.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 1843200 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:27.915116+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:57.079155+0000 osd.0 (osd.0) 140 : cluster [DBG] 5.9 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:57.089738+0000 osd.0 (osd.0) 141 : cluster [DBG] 5.9 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 141)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:57.079155+0000 osd.0 (osd.0) 140 : cluster [DBG] 5.9 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:57.089738+0000 osd.0 (osd.0) 141 : cluster [DBG] 5.9 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.764089584s of 10.828366280s, submitted: 81
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 1835008 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:28.915243+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:58.048520+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.4 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:58.059013+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.4 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 143)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:58.048520+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.4 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:58.059013+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.4 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 1826816 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766038 data_alloc: 218103808 data_used: 184320
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 99 heartbeat osd_stat(store_statfs(0x4fca82000/0x0/0x4ffc00000, data 0x11c4ff/0x199000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=0 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=0 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000039
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000064 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000106 1 0.000160
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000056 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000228 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:29.915367+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:59.090110+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.7 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:36:59.100696+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.7 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 145)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:59.090110+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.7 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:36:59.100696+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.7 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.122057 2 0.000161
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.122366 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.122571 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=0 lpr=99 pi=[51,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000181 1 0.000353
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000060 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=0 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=0 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000035
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000064 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000091 1 0.000152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000169 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 100 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:30.915483+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:00.079512+0000 osd.0 (osd.0) 146 : cluster [DBG] 5.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:00.090225+0000 osd.0 (osd.0) 147 : cluster [DBG] 5.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 147)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:00.079512+0000 osd.0 (osd.0) 146 : cluster [DBG] 5.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:00.090225+0000 osd.0 (osd.0) 147 : cluster [DBG] 5.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.004663 2 0.000094
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.004886 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.004998 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=0 lpr=100 pi=[51,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000337 1 0.000396
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000091 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.008934 6 0.000311
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 lc 35'504 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001946 3 0.000121
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 lc 35'504 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 lc 35'504 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000043 1 0.000057
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 lc 35'504 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035707 1 0.000036
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:31.915687+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:01.110289+0000 osd.0 (osd.0) 148 : cluster [DBG] 11.1 scrub starts
Nov 25 10:03:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:01.120893+0000 osd.0 (osd.0) 149 : cluster [DBG] 11.1 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 149)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:01.110289+0000 osd.0 (osd.0) 148 : cluster [DBG] 11.1 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:01.120893+0000 osd.0 (osd.0) 149 : cluster [DBG] 11.1 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.974484 1 0.000060
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.012315 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.013100 5 0.000187
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.021427 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] r=-1 lpr=100 pi=[51,100)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000206 1 0.000340
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000042 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000156
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=33
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=33
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000783 3 0.000075
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 lc 35'570 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002375 4 0.000109
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 lc 35'570 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 lc 35'570 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000030 1 0.000045
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 lc 35'570 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028681 1 0.000019
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 647168 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.657427 2 0.000056
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.658331 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.627503 1 0.000054
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.658714 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 1.671954 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] r=-1 lpr=101 pi=[51,101)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=102/103 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000055 1 0.000127
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000029
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 103 handle_osd_map epochs [102,103], i have 103, src has [1,103]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=27
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=102/103 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=27
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001327 3 0.000034
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=102/103 n=6 ec=51/29 lis/c=102/51 les/c/f=103/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001621 4 0.000315
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=102/103 n=6 ec=51/29 lis/c=102/51 les/c/f=103/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=102/103 n=6 ec=51/29 lis/c=102/51 les/c/f=103/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=102/103 n=6 ec=51/29 lis/c=102/51 les/c/f=103/52/0 sis=102) [0] r=0 lpr=102 pi=[51,102)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:32.915884+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:02.146882+0000 osd.0 (osd.0) 150 : cluster [DBG] 4.c scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:02.157671+0000 osd.0 (osd.0) 151 : cluster [DBG] 4.c scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 151)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:02.146882+0000 osd.0 (osd.0) 150 : cluster [DBG] 4.c scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:02.157671+0000 osd.0 (osd.0) 151 : cluster [DBG] 4.c scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 638976 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 103 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002826 2 0.000058
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004231 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=103/104 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=103/104 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=103/104 n=6 ec=51/29 lis/c=103/51 les/c/f=104/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001437 4 0.000162
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=103/104 n=6 ec=51/29 lis/c=103/51 les/c/f=104/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=103/104 n=6 ec=51/29 lis/c=103/51 les/c/f=104/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=103/104 n=6 ec=51/29 lis/c=103/51 les/c/f=104/52/0 sis=103) [0] r=0 lpr=103 pi=[51,103)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:33.916052+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:03.119581+0000 osd.0 (osd.0) 152 : cluster [DBG] 4.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:03.130285+0000 osd.0 (osd.0) 153 : cluster [DBG] 4.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 153)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:03.119581+0000 osd.0 (osd.0) 152 : cluster [DBG] 4.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:03.130285+0000 osd.0 (osd.0) 153 : cluster [DBG] 4.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802986 data_alloc: 218103808 data_used: 184320
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:34.916176+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:04.073698+0000 osd.0 (osd.0) 154 : cluster [DBG] 4.a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:04.084322+0000 osd.0 (osd.0) 155 : cluster [DBG] 4.a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 155)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:04.073698+0000 osd.0 (osd.0) 154 : cluster [DBG] 4.a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:04.084322+0000 osd.0 (osd.0) 155 : cluster [DBG] 4.a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fca71000/0x0/0x4ffc00000, data 0x126627/0x1aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fca71000/0x0/0x4ffc00000, data 0x126627/0x1aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:35.916298+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:05.072786+0000 osd.0 (osd.0) 156 : cluster [DBG] 11.5 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:05.083384+0000 osd.0 (osd.0) 157 : cluster [DBG] 11.5 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 157)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:05.072786+0000 osd.0 (osd.0) 156 : cluster [DBG] 11.5 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:05.083384+0000 osd.0 (osd.0) 157 : cluster [DBG] 11.5 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:36.916442+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:06.088886+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:06.099495+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 159)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:06.088886+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:06.099495+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:37.916562+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:07.120213+0000 osd.0 (osd.0) 160 : cluster [DBG] 5.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:07.130897+0000 osd.0 (osd.0) 161 : cluster [DBG] 5.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 161)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:07.120213+0000 osd.0 (osd.0) 160 : cluster [DBG] 5.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:07.130897+0000 osd.0 (osd.0) 161 : cluster [DBG] 5.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.027116776s of 10.073734283s, submitted: 101
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 606208 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:38.916699+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:08.169083+0000 osd.0 (osd.0) 162 : cluster [DBG] 5.7 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:08.179674+0000 osd.0 (osd.0) 163 : cluster [DBG] 5.7 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 163)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:08.169083+0000 osd.0 (osd.0) 162 : cluster [DBG] 5.7 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:08.179674+0000 osd.0 (osd.0) 163 : cluster [DBG] 5.7 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 606208 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811651 data_alloc: 218103808 data_used: 184320
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:39.916810+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:09.182513+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.4 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:09.193139+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.4 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 165)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:09.182513+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.4 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:09.193139+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.4 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 598016 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:40.916957+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:10.198765+0000 osd.0 (osd.0) 166 : cluster [DBG] 5.2 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:10.209374+0000 osd.0 (osd.0) 167 : cluster [DBG] 5.2 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 167)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:10.198765+0000 osd.0 (osd.0) 166 : cluster [DBG] 5.2 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:10.209374+0000 osd.0 (osd.0) 167 : cluster [DBG] 5.2 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fca6b000/0x0/0x4ffc00000, data 0x12a7ff/0x1b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 598016 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:41.917114+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:11.196923+0000 osd.0 (osd.0) 168 : cluster [DBG] 11.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:11.207449+0000 osd.0 (osd.0) 169 : cluster [DBG] 11.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 169)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:11.196923+0000 osd.0 (osd.0) 168 : cluster [DBG] 11.1b scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:11.207449+0000 osd.0 (osd.0) 169 : cluster [DBG] 11.1b scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15(unlocked)] enter Initial
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=0 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=0 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000023
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000127 1 0.000039
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000174 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.595817 2 0.000056
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.596129 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.596165 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=0 lpr=107 pi=[70,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000281 1 0.000568
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000101 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 108 handle_osd_map epochs [107,108], i have 108, src has [1,108]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:42.917274+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:12.226836+0000 osd.0 (osd.0) 170 : cluster [DBG] 8.18 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:12.237477+0000 osd.0 (osd.0) 171 : cluster [DBG] 8.18 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 171)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:12.226836+0000 osd.0 (osd.0) 170 : cluster [DBG] 8.18 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:12.237477+0000 osd.0 (osd.0) 171 : cluster [DBG] 8.18 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 565248 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:43.917440+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:13.187750+0000 osd.0 (osd.0) 172 : cluster [DBG] 3.13 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:13.198316+0000 osd.0 (osd.0) 173 : cluster [DBG] 3.13 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 173)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:13.187750+0000 osd.0 (osd.0) 172 : cluster [DBG] 3.13 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:13.198316+0000 osd.0 (osd.0) 173 : cluster [DBG] 3.13 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _renew_subs
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.456926 5 0.000557
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 55.464988 111 0.000787
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 55.468206 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 56.467887 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 56.467941 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535896301s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 active pruub 293.832427979s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535872459s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 293.832427979s@ mbc={}] exit Reset 0.000047 1 0.000083
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535872459s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 293.832427979s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535872459s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 293.832427979s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535872459s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 293.832427979s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535872459s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 293.832427979s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109 pruub=8.535872459s) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 293.832427979s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 lc 35'486 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001951 4 0.000126
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 lc 35'486 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 lc 35'486 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000039 1 0.000049
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 lc 35'486 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.032601 1 0.000034
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 516096 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837560 data_alloc: 218103808 data_used: 184320
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:44.917633+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:14.164375+0000 osd.0 (osd.0) 174 : cluster [DBG] 5.15 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:14.178446+0000 osd.0 (osd.0) 175 : cluster [DBG] 5.15 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _renew_subs
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.962832 3 0.000025
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.928360 1 0.000041
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.963135 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.420352 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[70,108)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.962976 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=-1 lpr=109 pi=[72,109)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000119 1 0.000294
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000025 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000215 1 0.000344
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000125
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000265
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000182 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 175)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:14.164375+0000 osd.0 (osd.0) 174 : cluster [DBG] 5.15 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:14.178446+0000 osd.0 (osd.0) 175 : cluster [DBG] 5.15 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: merge_log_dups log.dups.size()=0 olog.dups.size()=27
Nov 25 10:03:01 compute-1 ceph-osd[77354]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=27
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002158 3 0.000091
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000028 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 491520 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:45.917804+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:15.128895+0000 osd.0 (osd.0) 176 : cluster [DBG] 11.1a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:15.139459+0000 osd.0 (osd.0) 177 : cluster [DBG] 11.1a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998647 2 0.000333
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001046 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000920 4 0.000271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001258 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/70 les/c/f=111/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001098 3 0.000117
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/70 les/c/f=111/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/70 les/c/f=111/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/70 les/c/f=111/71/0 sis=110) [0] r=0 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 177)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:15.128895+0000 osd.0 (osd.0) 176 : cluster [DBG] 11.1a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:15.139459+0000 osd.0 (osd.0) 177 : cluster [DBG] 11.1a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 475136 heap: 76505088 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fca5b000/0x0/0x4ffc00000, data 0x134b30/0x1c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.772785 5 0.000230
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000077 1 0.000065
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000483 1 0.000032
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:46.917935+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:16.159404+0000 osd.0 (osd.0) 178 : cluster [DBG] 3.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:16.169843+0000 osd.0 (osd.0) 179 : cluster [DBG] 3.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.055072 2 0.000035
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 179)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:16.159404+0000 osd.0 (osd.0) 178 : cluster [DBG] 3.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:16.169843+0000 osd.0 (osd.0) 179 : cluster [DBG] 3.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.201223 1 0.000119
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.029825 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.031238 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.031288 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742835999s) [2] async=[2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 42'1151 active pruub 304.033843994s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742785454s) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 304.033843994s@ mbc={}] exit Reset 0.000083 1 0.000129
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742785454s) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 304.033843994s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742785454s) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 304.033843994s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742785454s) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 304.033843994s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742785454s) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 304.033843994s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112 pruub=15.742785454s) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 304.033843994s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1490944 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:47.918060+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:17.154925+0000 osd.0 (osd.0) 180 : cluster [DBG] 4.13 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:17.165452+0000 osd.0 (osd.0) 181 : cluster [DBG] 4.13 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.020062447s of 10.058932304s, submitted: 57
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 181)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:17.154925+0000 osd.0 (osd.0) 180 : cluster [DBG] 4.13 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:17.165452+0000 osd.0 (osd.0) 181 : cluster [DBG] 4.13 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010403 7 0.000080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000045 1 0.000049
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=-1 lpr=112 DELETING pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030710 2 0.000115
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030780 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=-1 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.041224 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1458176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:48.918205+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:18.142143+0000 osd.0 (osd.0) 182 : cluster [DBG] 11.1c deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:18.152744+0000 osd.0 (osd.0) 183 : cluster [DBG] 11.1c deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 183)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:18.142143+0000 osd.0 (osd.0) 182 : cluster [DBG] 11.1c deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:18.152744+0000 osd.0 (osd.0) 183 : cluster [DBG] 11.1c deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x138a1b/0x1c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1458176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845693 data_alloc: 218103808 data_used: 184320
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:49.918353+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:19.116948+0000 osd.0 (osd.0) 184 : cluster [DBG] 5.10 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:19.127516+0000 osd.0 (osd.0) 185 : cluster [DBG] 5.10 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 185)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:19.116948+0000 osd.0 (osd.0) 184 : cluster [DBG] 5.10 deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:19.127516+0000 osd.0 (osd.0) 185 : cluster [DBG] 5.10 deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 1458176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:50.918447+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:20.166421+0000 osd.0 (osd.0) 186 : cluster [DBG] 11.1e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:20.177006+0000 osd.0 (osd.0) 187 : cluster [DBG] 11.1e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 187)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:20.166421+0000 osd.0 (osd.0) 186 : cluster [DBG] 11.1e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:20.177006+0000 osd.0 (osd.0) 187 : cluster [DBG] 11.1e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1417216 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:51.918593+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:21.123902+0000 osd.0 (osd.0) 188 : cluster [DBG] 3.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:21.134543+0000 osd.0 (osd.0) 189 : cluster [DBG] 3.14 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 113 ms_handle_reset con 0x5584d1d57400 session 0x5584d3eec1e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 189)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:21.123902+0000 osd.0 (osd.0) 188 : cluster [DBG] 3.14 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:21.134543+0000 osd.0 (osd.0) 189 : cluster [DBG] 3.14 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1409024 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:52.918716+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:22.141370+0000 osd.0 (osd.0) 190 : cluster [DBG] 3.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:22.151689+0000 osd.0 (osd.0) 191 : cluster [DBG] 3.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 191)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:22.141370+0000 osd.0 (osd.0) 190 : cluster [DBG] 3.16 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:22.151689+0000 osd.0 (osd.0) 191 : cluster [DBG] 3.16 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1400832 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:53.918837+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:23.118034+0000 osd.0 (osd.0) 192 : cluster [DBG] 4.e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:23.128731+0000 osd.0 (osd.0) 193 : cluster [DBG] 4.e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 113 handle_osd_map epochs [114,115], i have 113, src has [1,115]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 193)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:23.118034+0000 osd.0 (osd.0) 192 : cluster [DBG] 4.e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:23.128731+0000 osd.0 (osd.0) 193 : cluster [DBG] 4.e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1376256 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862319 data_alloc: 218103808 data_used: 184320
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:54.918970+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:24.070290+0000 osd.0 (osd.0) 194 : cluster [DBG] 5.1f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:24.080995+0000 osd.0 (osd.0) 195 : cluster [DBG] 5.1f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 116 heartbeat osd_stat(store_statfs(0x4fca4c000/0x0/0x4ffc00000, data 0x13ebc7/0x1ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=85) [0] r=0 lpr=85 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 49.346086 96 0.000380
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=85) [0] r=0 lpr=85 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 49.347835 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=85) [0] r=0 lpr=85 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 50.354544 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=85) [0] r=0 lpr=85 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 50.354777 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=85) [0] r=0 lpr=85 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654538155s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 active pruub 310.990783691s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654507637s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 310.990783691s@ mbc={}] exit Reset 0.000057 1 0.000103
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654507637s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 310.990783691s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654507637s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 310.990783691s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654507637s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 310.990783691s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654507637s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 310.990783691s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 117 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=14.654507637s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 310.990783691s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 195)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:24.070290+0000 osd.0 (osd.0) 194 : cluster [DBG] 5.1f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:24.080995+0000 osd.0 (osd.0) 195 : cluster [DBG] 5.1f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1376256 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:55.919122+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:25.043931+0000 osd.0 (osd.0) 196 : cluster [DBG] 5.11 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:25.054235+0000 osd.0 (osd.0) 197 : cluster [DBG] 5.11 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fca4a000/0x0/0x4ffc00000, data 0x140cd4/0x1d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012084 3 0.000029
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.012115 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=-1 lpr=117 pi=[85,117)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000068
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000035
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 118 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 197)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:25.043931+0000 osd.0 (osd.0) 196 : cluster [DBG] 5.11 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:25.054235+0000 osd.0 (osd.0) 197 : cluster [DBG] 5.11 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1351680 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:56.919254+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:26.050171+0000 osd.0 (osd.0) 198 : cluster [DBG] 11.1d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:26.060761+0000 osd.0 (osd.0) 199 : cluster [DBG] 11.1d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002794 4 0.000047
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002881 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=85/86 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 199)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:26.050171+0000 osd.0 (osd.0) 198 : cluster [DBG] 11.1d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:26.060761+0000 osd.0 (osd.0) 199 : cluster [DBG] 11.1d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002785 5 0.000589
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000047 1 0.000041
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000503 1 0.000038
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.029016 2 0.000056
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 1335296 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 119 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.461713 1 0.000134
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 0.494424 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 1.497348 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 1.497378 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.508149147s) [1] async=[1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 42'1151 active pruub 314.354034424s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.507986069s) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 314.354034424s@ mbc={}] exit Reset 0.000201 1 0.000290
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.507986069s) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 314.354034424s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.507986069s) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 314.354034424s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.507986069s) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 314.354034424s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.507986069s) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 314.354034424s@ mbc={}] exit Start 0.000083 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.507986069s) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 314.354034424s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:57.919393+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:27.062577+0000 osd.0 (osd.0) 200 : cluster [DBG] 8.19 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:27.076736+0000 osd.0 (osd.0) 201 : cluster [DBG] 8.19 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 201)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:27.062577+0000 osd.0 (osd.0) 200 : cluster [DBG] 8.19 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:27.076736+0000 osd.0 (osd.0) 201 : cluster [DBG] 8.19 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1277952 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:58.919549+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:28.022194+0000 osd.0 (osd.0) 202 : cluster [DBG] 6.3 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:28.039840+0000 osd.0 (osd.0) 203 : cluster [DBG] 6.3 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _renew_subs
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.814392090s of 10.857058525s, submitted: 53
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.285209 6 0.000257
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001088 2 0.000087
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=-1 lpr=120 DELETING pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044666 2 0.000094
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.045811 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=-1 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.331177 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 203)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:28.022194+0000 osd.0 (osd.0) 202 : cluster [DBG] 6.3 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:28.039840+0000 osd.0 (osd.0) 203 : cluster [DBG] 6.3 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1269760 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875103 data_alloc: 218103808 data_used: 196608
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:59.919698+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:28.999179+0000 osd.0 (osd.0) 204 : cluster [DBG] 6.2 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:29.017227+0000 osd.0 (osd.0) 205 : cluster [DBG] 6.2 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 205)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:28.999179+0000 osd.0 (osd.0) 204 : cluster [DBG] 6.2 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:29.017227+0000 osd.0 (osd.0) 205 : cluster [DBG] 6.2 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1269760 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fca3e000/0x0/0x4ffc00000, data 0x148b73/0x1dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:00.919815+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:29.963394+0000 osd.0 (osd.0) 206 : cluster [DBG] 6.5 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:29.981089+0000 osd.0 (osd.0) 207 : cluster [DBG] 6.5 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 207)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:29.963394+0000 osd.0 (osd.0) 206 : cluster [DBG] 6.5 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:29.981089+0000 osd.0 (osd.0) 207 : cluster [DBG] 6.5 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1245184 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:01.919928+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:31.001551+0000 osd.0 (osd.0) 208 : cluster [DBG] 6.a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:31.012164+0000 osd.0 (osd.0) 209 : cluster [DBG] 6.a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fca3c000/0x0/0x4ffc00000, data 0x14ac5f/0x1df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 209)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:31.001551+0000 osd.0 (osd.0) 208 : cluster [DBG] 6.a scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:31.012164+0000 osd.0 (osd.0) 209 : cluster [DBG] 6.a scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1236992 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:02.920074+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:31.994113+0000 osd.0 (osd.0) 210 : cluster [DBG] 6.7 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:32.008366+0000 osd.0 (osd.0) 211 : cluster [DBG] 6.7 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 211)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:31.994113+0000 osd.0 (osd.0) 210 : cluster [DBG] 6.7 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:32.008366+0000 osd.0 (osd.0) 211 : cluster [DBG] 6.7 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 122 handle_osd_map epochs [123,124], i have 122, src has [1,124]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1228800 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:03.920234+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:32.951347+0000 osd.0 (osd.0) 212 : cluster [DBG] 6.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:32.969041+0000 osd.0 (osd.0) 213 : cluster [DBG] 6.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 213)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:32.951347+0000 osd.0 (osd.0) 212 : cluster [DBG] 6.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:32.969041+0000 osd.0 (osd.0) 213 : cluster [DBG] 6.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fca35000/0x0/0x4ffc00000, data 0x14ed40/0x1e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 1318912 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891930 data_alloc: 218103808 data_used: 200704
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:04.920364+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 3 last_log 216 sent 213 num 3 unsent 3 sending 3
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:33.948530+0000 osd.0 (osd.0) 214 : cluster [DBG] 6.e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:33.962747+0000 osd.0 (osd.0) 215 : cluster [DBG] 6.e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:34.912638+0000 osd.0 (osd.0) 216 : cluster [DBG] 6.8 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 216)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:33.948530+0000 osd.0 (osd.0) 214 : cluster [DBG] 6.e scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:33.962747+0000 osd.0 (osd.0) 215 : cluster [DBG] 6.e scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:34.912638+0000 osd.0 (osd.0) 216 : cluster [DBG] 6.8 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=90) [0] r=0 lpr=90 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 53.494371 105 0.000291
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=90) [0] r=0 lpr=90 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 53.495755 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=90) [0] r=0 lpr=90 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 54.500943 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=90) [0] r=0 lpr=90 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 54.500973 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=90) [0] r=0 lpr=90 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506638527s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 active pruub 317.039733887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506608963s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 317.039733887s@ mbc={}] exit Reset 0.000056 1 0.000116
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506608963s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 317.039733887s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506608963s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 317.039733887s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506608963s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 317.039733887s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506608963s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 317.039733887s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 126 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=10.506608963s) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 317.039733887s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 1302528 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:05.920534+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 3 last_log 219 sent 216 num 3 unsent 3 sending 3
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:34.923215+0000 osd.0 (osd.0) 217 : cluster [DBG] 6.8 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:35.874759+0000 osd.0 (osd.0) 218 : cluster [DBG] 9.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:35.892443+0000 osd.0 (osd.0) 219 : cluster [DBG] 9.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 219)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:34.923215+0000 osd.0 (osd.0) 217 : cluster [DBG] 6.8 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:35.874759+0000 osd.0 (osd.0) 218 : cluster [DBG] 9.10 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:35.892443+0000 osd.0 (osd.0) 219 : cluster [DBG] 9.10 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.895196 3 0.000034
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.895234 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=-1 lpr=126 pi=[90,126)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000123 1 0.000152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000066 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000098 1 0.000269
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000078 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000029 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 127 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 1294336 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:06.920649+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 1 last_log 220 sent 219 num 1 unsent 1 sending 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:36.907723+0000 osd.0 (osd.0) 220 : cluster [DBG] 9.11 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 220)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:36.907723+0000 osd.0 (osd.0) 220 : cluster [DBG] 9.11 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008443 4 0.000373
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.008992 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=90/91 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1261568 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.586383 5 0.000644
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000085 1 0.000079
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000377 1 0.000033
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:07.920752+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 1 last_log 221 sent 220 num 1 unsent 1 sending 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:36.935962+0000 osd.0 (osd.0) 221 : cluster [DBG] 9.11 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.037269 2 0.000054
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 221)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:36.935962+0000 osd.0 (osd.0) 221 : cluster [DBG] 9.11 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.380397 1 0.000051
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004809 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.013843 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.013961 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] async=[2] r=0 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581406593s) [2] async=[2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 42'1151 active pruub 325.023925781s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581371307s) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.023925781s@ mbc={}] exit Reset 0.000055 1 0.000086
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581371307s) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.023925781s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581371307s) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.023925781s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581371307s) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.023925781s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581371307s) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.023925781s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.581371307s) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.023925781s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1261568 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:08.920890+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fca26000/0x0/0x4ffc00000, data 0x158d31/0x1f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.281042099s of 10.319630623s, submitted: 50
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020694 7 0.000094
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000066 1 0.000043
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=-1 lpr=129 DELETING pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038253 2 0.000183
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038377 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=-1 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.059109 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1155072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905855 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:09.921040+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1155072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:10.921135+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1155072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:11.921238+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 1146880 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:12.921441+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fca24000/0x0/0x4ffc00000, data 0x15abf7/0x1f5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 84.402867 177 0.000593
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 84.405415 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 85.405581 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 85.405694 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=72) [0] r=0 lpr=72 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598611832s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 active pruub 325.833038330s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598580360s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.833038330s@ mbc={}] exit Reset 0.000059 1 0.000101
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598580360s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.833038330s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598580360s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.833038330s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598580360s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.833038330s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598580360s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.833038330s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131 pruub=11.598580360s) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 325.833038330s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=95) [0] r=0 lpr=95 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 51.688966 108 0.000483
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.548944 3 0.000048
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.548980 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=-1 lpr=131 pi=[72,131)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=95) [0] r=0 lpr=95 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 51.690082 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000057 1 0.000082
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=95) [0] r=0 lpr=95 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 52.692346 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=95) [0] r=0 lpr=95 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 52.692378 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=95) [0] r=0 lpr=95 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311533928s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 active pruub 327.095214844s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311475754s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 327.095214844s@ mbc={}] exit Reset 0.000161 1 0.000345
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311475754s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 327.095214844s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311475754s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 327.095214844s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311475754s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 327.095214844s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311475754s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 327.095214844s@ mbc={}] exit Start 0.000058 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=12.311475754s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 327.095214844s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001669 2 0.000044
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1122304 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:13.921591+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 223 sent 221 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:42.984302+0000 osd.0 (osd.0) 222 : cluster [DBG] 9.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:43.009048+0000 osd.0 (osd.0) 223 : cluster [DBG] 9.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004923 3 0.000127
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.005087 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003605 3 0.000073
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005353 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=72/73 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=-1 lpr=132 pi=[95,132)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 223)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:42.984302+0000 osd.0 (osd.0) 222 : cluster [DBG] 9.12 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:43.009048+0000 osd.0 (osd.0) 223 : cluster [DBG] 9.12 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000479 1 0.000618
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000092 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000575 2 0.000221
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002521 5 0.000206
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000061 1 0.000058
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000550 1 0.000140
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035403 2 0.000065
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1105920 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916338 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:14.921746+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 225 sent 223 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:43.934582+0000 osd.0 (osd.0) 224 : cluster [DBG] 9.15 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:43.959444+0000 osd.0 (osd.0) 225 : cluster [DBG] 9.15 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 225)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:43.934582+0000 osd.0 (osd.0) 224 : cluster [DBG] 9.15 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:43.959444+0000 osd.0 (osd.0) 225 : cluster [DBG] 9.15 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 133 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012803 3 0.000133
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.013508 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.975433 1 0.000070
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014209 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.019593 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.019634 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] async=[1] r=0 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988237381s) [1] async=[1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 42'1151 active pruub 331.791442871s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988134384s) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 331.791442871s@ mbc={}] exit Reset 0.000137 1 0.000233
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988134384s) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 331.791442871s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988134384s) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 331.791442871s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988134384s) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 331.791442871s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988134384s) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 331.791442871s@ mbc={}] exit Start 0.000040 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134 pruub=14.988134384s) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 331.791442871s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.003816 5 0.000239
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000048 1 0.000055
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000281 1 0.000080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035373 2 0.000028
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1089536 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:15.921876+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 3 last_log 228 sent 225 num 3 unsent 3 sending 3
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:44.928015+0000 osd.0 (osd.0) 226 : cluster [DBG] 9.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:44.966850+0000 osd.0 (osd.0) 227 : cluster [DBG] 9.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:45.907865+0000 osd.0 (osd.0) 228 : cluster [DBG] 9.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 228)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:44.928015+0000 osd.0 (osd.0) 226 : cluster [DBG] 9.d scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:44.966850+0000 osd.0 (osd.0) 227 : cluster [DBG] 9.d scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:45.907865+0000 osd.0 (osd.0) 228 : cluster [DBG] 9.f scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.969091 1 0.000046
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.008803 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.022337 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.022478 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994965553s) [1] async=[1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 42'1151 active pruub 332.806915283s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994921684s) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 332.806915283s@ mbc={}] exit Reset 0.000072 1 0.000109
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994921684s) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 332.806915283s@ mbc={}] enter Started
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994921684s) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 332.806915283s@ mbc={}] enter Start
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994921684s) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 332.806915283s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994921684s) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 332.806915283s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=14.994921684s) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 332.806915283s@ mbc={}] enter Started/Stray
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011744 7 0.000138
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000045 1 0.000044
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 983040 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=-1 lpr=134 DELETING pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038997 2 0.000121
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039079 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=-1 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.050913 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:16.921987+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 2 last_log 230 sent 228 num 2 unsent 2 sending 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:45.943197+0000 osd.0 (osd.0) 229 : cluster [DBG] 9.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:46.917837+0000 osd.0 (osd.0) 230 : cluster [DBG] 9.a deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 230)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:45.943197+0000 osd.0 (osd.0) 229 : cluster [DBG] 9.f scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:46.917837+0000 osd.0 (osd.0) 230 : cluster [DBG] 9.a deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011904 7 0.000107
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000050 1 0.000045
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 974848 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=-1 lpr=135 DELETING pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038031 2 0.000112
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038113 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=-1 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.050053 0 0.000000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:17.922141+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 1 last_log 231 sent 230 num 1 unsent 1 sending 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:46.960198+0000 osd.0 (osd.0) 231 : cluster [DBG] 9.a deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 231)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:46.960198+0000 osd.0 (osd.0) 231 : cluster [DBG] 9.a deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 966656 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:18.922276+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  log_queue is 4 last_log 235 sent 231 num 4 unsent 4 sending 4
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:47.927297+0000 osd.0 (osd.0) 232 : cluster [DBG] 9.e deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:47.955552+0000 osd.0 (osd.0) 233 : cluster [DBG] 9.e deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:48.880806+0000 osd.0 (osd.0) 234 : cluster [DBG] 9.6 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  will send 2025-11-25T09:37:48.912594+0000 osd.0 (osd.0) 235 : cluster [DBG] 9.6 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca14000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client handle_log_ack log(last 235)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:47.927297+0000 osd.0 (osd.0) 232 : cluster [DBG] 9.e deep-scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:47.955552+0000 osd.0 (osd.0) 233 : cluster [DBG] 9.e deep-scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:48.880806+0000 osd.0 (osd.0) 234 : cluster [DBG] 9.6 scrub starts
Nov 25 10:03:01 compute-1 ceph-osd[77354]: log_client  logged 2025-11-25T09:37:48.912594+0000 osd.0 (osd.0) 235 : cluster [DBG] 9.6 scrub ok
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 966656 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910488 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:19.922464+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 958464 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:20.922636+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 958464 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:21.922777+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 950272 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:22.922932+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 950272 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:23.923074+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 942080 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910488 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:24.923176+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2820400 session 0x5584d2bd45a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1f17800 session 0x5584d39ac5a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 942080 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:25.923281+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 917504 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:26.923432+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 909312 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:27.923534+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 909312 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:28.923644+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 909312 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910488 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:29.923752+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 901120 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:30.923847+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 901120 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:31.923946+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 892928 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:32.924065+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 892928 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:33.924172+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1f17000 session 0x5584d472fe00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 884736 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910488 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:34.924274+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 884736 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:35.924396+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 860160 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:36.924533+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 860160 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:37.924671+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 860160 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:38.924811+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 851968 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910488 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:39.924933+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 851968 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:40.925073+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 835584 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:41.925190+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2821000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 835584 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:42.925326+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 827392 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:43.925453+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 827392 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910488 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:44.925593+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 827392 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:45.925719+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 802816 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:46.925848+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 802816 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:47.926002+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.606834412s of 38.651565552s, submitted: 64
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 794624 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:48.926122+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 794624 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909897 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:49.926260+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 786432 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:50.926429+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 778240 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:51.926578+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 778240 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:52.926756+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 770048 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:53.926880+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 770048 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:54.927027+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 761856 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:55.927179+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 753664 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:56.927339+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 753664 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:57.927492+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 745472 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:58.927613+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 745472 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:59.927770+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 737280 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:00.927918+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 729088 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:01.928013+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 729088 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:02.928160+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 720896 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:03.928263+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 720896 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:04.928403+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 712704 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:05.928524+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 696320 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:06.928640+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 688128 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:07.928762+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 688128 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:08.928865+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 688128 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:09.929005+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 679936 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:10.929105+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 679936 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:11.929197+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 663552 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:12.929322+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 663552 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:13.929443+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 663552 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:14.929580+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 655360 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:15.929735+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 655360 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:16.929832+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 647168 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:17.929922+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 647168 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:18.930022+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 647168 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:19.930118+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 638976 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:20.930209+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 638976 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:21.930306+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 630784 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:22.930493+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 630784 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:23.930660+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 622592 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:24.930771+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 622592 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:25.930882+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 606208 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:26.930981+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 598016 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:27.931071+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 598016 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:28.931159+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 589824 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:29.931252+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1e0d800 session 0x5584d129e5a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 589824 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:30.931338+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 589824 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:31.931494+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 581632 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:32.931664+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 581632 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:33.931849+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:34.932001+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:35.932142+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:36.932241+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 565248 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:37.932366+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 565248 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:38.932478+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 557056 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:39.932574+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 557056 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:40.932693+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 557056 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:41.932801+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 548864 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:42.932960+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 548864 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:43.933108+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 540672 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:44.933281+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 540672 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:45.933405+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 532480 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:46.933544+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 532480 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:47.933686+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 524288 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:48.933808+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 524288 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:49.933903+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 62.074703217s of 62.080329895s, submitted: 4
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 516096 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:50.934039+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 507904 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:51.934138+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:52.934263+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 507904 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:53.934394+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:54.934512+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:55.934672+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:56.935303+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 483328 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:57.935447+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 483328 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:58.935593+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 475136 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:59.935745+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 475136 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:00.936118+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 466944 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:01.936216+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 466944 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:02.936318+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 466944 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:03.936441+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 458752 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:04.936576+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 458752 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:05.936700+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 450560 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:06.936824+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 450560 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:07.936980+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 450560 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:08.937137+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 442368 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:09.937275+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 442368 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:10.937403+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 434176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:11.937517+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 434176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:12.937692+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 434176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:13.937808+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 425984 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:14.937934+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 425984 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:15.938036+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 425984 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:16.938135+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 417792 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:17.938250+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:18.938354+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:19.938457+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:20.938553+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 401408 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:21.938722+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 401408 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:22.938910+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 401408 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:23.939028+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 393216 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:24.939156+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 393216 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:25.939285+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 385024 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:26.939455+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 385024 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:27.939566+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:28.939674+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:29.939784+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:30.939922+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 368640 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:31.940057+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 368640 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:32.940205+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 360448 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:33.940329+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 360448 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:34.940502+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:35.940671+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:36.940788+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:37.940950+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 344064 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:38.941066+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 344064 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:39.941163+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 335872 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:40.941284+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 335872 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:41.941443+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 327680 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:42.941585+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 327680 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:43.941737+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 327680 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:44.941854+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 319488 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:45.941978+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 319488 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:46.942457+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 319488 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:47.942586+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 311296 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:48.942694+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 311296 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:49.942794+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 303104 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:50.942898+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 294912 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:51.943009+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 286720 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:52.943159+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 286720 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:53.943278+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 286720 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:54.943394+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 278528 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:55.943452+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 278528 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:56.943551+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 278528 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:57.943706+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 270336 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:58.943808+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 270336 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:59.943922+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 262144 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:00.944015+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 262144 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:01.944115+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 253952 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:02.944481+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 253952 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2821000 session 0x5584d46b1c20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:03.944595+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 253952 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:04.944711+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:05.944814+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:06.944926+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:07.945025+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 237568 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:08.945137+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 237568 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:09.945284+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:10.945446+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:11.945586+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:12.945728+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 221184 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:13.945866+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 221184 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:14.946014+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 212992 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:15.946171+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 212992 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:16.946277+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 212992 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 87.239952087s of 87.242134094s, submitted: 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:17.946385+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 204800 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:18.946450+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 204800 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:19.946555+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 196608 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:20.946665+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 180224 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:21.946801+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 172032 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:22.946961+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 172032 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:23.947068+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 172032 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:24.947210+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 163840 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:25.947313+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 163840 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:26.947417+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 163840 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:27.947554+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 155648 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:28.947668+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 155648 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:29.947771+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 147456 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:30.947878+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 147456 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:31.947974+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:32.948110+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:33.948213+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:34.948318+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 131072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:35.948451+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 131072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:36.948549+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 131072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:37.948655+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 122880 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:38.948759+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 122880 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:39.948864+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:40.948977+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:41.949093+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:42.949215+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 106496 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:43.949376+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 106496 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:44.949539+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:45.949662+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:46.949797+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:47.949910+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 90112 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:48.950028+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 90112 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:49.950134+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 81920 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:50.950246+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 81920 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:51.950372+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 81920 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:52.950490+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 73728 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:53.950596+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 73728 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:54.950695+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 65536 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:55.950836+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 65536 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:56.950968+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 65536 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:57.951129+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 57344 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:58.951227+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 57344 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:59.951321+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 49152 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:00.951439+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 49152 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:01.951532+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 49152 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:02.951656+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 40960 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:03.951763+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 40960 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:04.951866+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 32768 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:05.951968+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 32768 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:06.952063+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 32768 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:07.952166+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 24576 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.442672729s of 51.444469452s, submitted: 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:08.952259+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 24576 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:09.952353+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 16384 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:10.952521+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 16384 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:11.952627+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 8192 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:12.952781+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 8192 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:13.952891+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 8192 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:14.952996+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 0 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:15.953095+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 0 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:16.953195+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1040384 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:17.953312+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1040384 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:18.953462+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1040384 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:19.953572+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1032192 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:20.953678+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1032192 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:21.953784+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2820800 session 0x5584d47974a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1024000 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:22.953902+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1024000 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:23.954006+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1015808 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:24.954104+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1015808 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:25.954222+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:26.954317+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:27.954436+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:28.954566+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:29.954696+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:30.954786+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:31.954875+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:32.954990+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:33.955100+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:34.955583+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:35.955719+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:36.955844+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:37.955965+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:38.956127+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:39.956239+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:40.956341+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:41.956463+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:42.956593+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:43.956692+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 950272 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:44.956785+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 950272 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:45.956939+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:46.957051+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:47.957162+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:48.957260+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:49.957357+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:50.957486+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:51.957593+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:52.957713+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:53.957828+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:54.957942+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:55.958059+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:56.958173+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:57.958282+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 6386 writes, 26K keys, 6386 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6386 writes, 1197 syncs, 5.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6386 writes, 26K keys, 6386 commit groups, 1.0 writes per commit group, ingest: 19.54 MB, 0.03 MB/s
                                           Interval WAL: 6386 writes, 1197 syncs, 5.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:58.958391+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:59.958461+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:00.958563+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:01.958663+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:02.958778+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:03.958880+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:04.958990+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:05.959089+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:06.959185+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:07.959288+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:08.959471+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 778240 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:09.959825+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 778240 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:10.959933+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:11.960256+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:12.960384+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:13.960494+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:14.960604+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:15.960706+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:16.960905+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:17.961071+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:18.961231+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:19.961352+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:20.961471+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:21.961610+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:22.961741+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:23.961866+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:24.961972+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 720896 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:25.962103+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 720896 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:26.962198+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:27.962314+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:28.962448+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:29.962555+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:30.962722+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:31.962866+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:32.963021+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:33.963137+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:34.963266+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:35.963544+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:36.963677+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:37.963859+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:38.964018+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:39.964186+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:40.964355+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:41.964491+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:42.964631+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:43.964802+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:44.964922+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:45.965052+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:46.965219+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:47.965343+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:48.965452+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:49.965558+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:50.965659+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:51.965757+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:52.965868+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:53.965981+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:54.966099+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:55.966211+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:56.966322+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:57.966449+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:58.966557+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:59.966662+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:00.966760+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:01.966856+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:02.966964+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:03.967065+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:04.967154+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:05.967250+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:06.967381+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.153465271s of 119.154769897s, submitted: 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:07.967433+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:08.967550+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:09.968524+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:10.968691+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:11.968857+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:12.969039+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:13.969153+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1286144 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:14.969328+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 1253376 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:15.969451+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 1253376 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:16.969557+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 1253376 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:17.969649+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:18.969772+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:19.969876+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:20.969969+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:21.970070+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:22.970191+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:23.970294+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 1236992 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:24.970396+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 1236992 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:25.970537+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1228800 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:26.970687+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1228800 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:27.970829+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1228800 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:28.970959+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1220608 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:29.971111+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1220608 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:30.971253+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 1212416 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:31.971406+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1204224 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:32.971600+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1204224 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:33.971748+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1204224 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:34.971892+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1196032 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:35.972041+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1196032 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:36.972215+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1179648 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:37.972341+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1179648 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:38.972476+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1179648 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:39.972618+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1171456 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:40.972758+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1171456 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:41.972899+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1163264 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:42.973060+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1163264 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:43.973221+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 1155072 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:44.973389+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 1155072 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:45.973553+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 1155072 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:46.973712+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1146880 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:47.973826+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1146880 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:48.973931+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1138688 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:49.974032+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1138688 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:50.974173+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1138688 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:51.974346+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 1130496 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:52.974471+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 1122304 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:53.974599+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1114112 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:54.974742+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1114112 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:55.974837+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1114112 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:56.974939+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:57.975039+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:58.975151+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:59.975299+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:00.975406+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:01.975550+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:02.975664+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:03.975860+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 1089536 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:04.975979+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 1089536 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:05.976133+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:06.976233+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:07.976366+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:08.976452+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:09.976566+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:10.976672+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:11.976777+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:12.976913+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:13.977013+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:14.977141+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:15.977303+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:16.977442+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:17.977538+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:18.977641+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:19.977743+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:20.977851+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:21.977967+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:22.978111+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:23.978240+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:24.978344+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:25.978450+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:26.978552+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:27.978646+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:28.978943+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:29.979031+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:30.979161+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:31.979264+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:32.979395+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:33.979555+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:34.979660+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:35.979756+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:36.979921+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:37.980075+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:38.980180+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:39.980292+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:40.980430+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:41.980536+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:42.980662+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:43.980765+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:44.980935+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:45.981026+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:46.981119+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:47.981208+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:48.981303+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 101.919792175s of 102.060813904s, submitted: 258
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:49.981441+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:50.981558+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:51.981703+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:52.981868+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:53.982032+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:54.982161+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:55.982275+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:56.982403+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:57.982519+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:58.982627+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:59.982738+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:00.982871+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:01.982975+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:02.983109+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:03.983225+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:04.983342+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:05.983455+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:06.983551+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:07.983657+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:08.983766+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:09.983868+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:10.983979+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:11.984080+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:12.984219+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:13.984336+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:14.984445+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:15.984546+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:16.984715+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:17.984866+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:18.984996+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:19.985122+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:20.985281+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:21.985463+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:22.985621+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:23.985781+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:24.985905+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:25.986016+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:26.986179+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:27.986284+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:28.986395+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:29.986514+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:30.986627+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:31.986722+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:32.986869+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:33.986991+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:34.987117+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:35.987268+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:36.987377+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:37.987508+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:38.987619+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:39.987738+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:40.987865+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:41.987971+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:42.988121+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:43.988246+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:44.988384+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:45.988453+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:46.988606+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:47.988731+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:48.988861+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:49.989735+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:50.989840+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:51.989966+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:52.990115+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:53.990229+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:54.990327+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:55.990432+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:56.990534+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:57.990634+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:58.990727+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:59.990828+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:00.990927+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:01.991064+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:02.991209+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:03.991312+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:04.991442+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:05.991584+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:06.991670+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:07.992724+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:08.992821+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:09.992925+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:10.993045+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:11.993187+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:12.993349+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:13.993484+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:14.993579+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:15.993670+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:16.993787+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:17.993917+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:18.994032+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:19.994134+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:20.994237+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:21.994347+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1fc00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 92.995887756s of 92.997634888s, submitted: 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:22.994472+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:23.994580+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:24.994713+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915093 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:25.994813+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:26.994915+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:27.995487+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:28.995666+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:29.995759+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914502 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:30.995872+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:31.995977+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:32.996091+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:33.996186+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:34.996284+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914502 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:35.996388+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:36.996467+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:37.996559+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:38.996646+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:39.996746+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914502 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:40.996835+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.800531387s of 18.803356171s, submitted: 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:41.996928+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:42.997037+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:43.997128+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:44.997223+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916014 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:45.997313+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:46.997400+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:47.997504+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:48.997603+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:49.997718+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:50.997815+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915423 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:51.997921+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1b1fc00 session 0x5584d4ed5860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:52.998135+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:53.998241+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:54.998384+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:55.998541+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915423 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:56.998685+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:57.998798+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:58.998928+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:59.999035+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:00.999145+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915423 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:01.999276+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:02.999392+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:03.999521+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:04.999657+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:05.999758+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915423 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:06.999907+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:08.000005+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.186798096s of 27.189485550s, submitted: 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:09.000101+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:10.000204+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:11.000300+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916935 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:12.000395+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:13.000529+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:14.000656+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:15.000797+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:16.000947+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:17.001049+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:18.001157+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:19.001316+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:20.001432+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1e0d800 session 0x5584d4ed4b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:21.001548+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:22.001642+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:23.001756+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:24.001857+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:25.001962+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:26.002036+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:27.002128+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:28.002223+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:29.002359+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:30.002494+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:31.002650+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:32.002786+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:33.002950+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:34.003056+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:35.003161+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:36.003272+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:37.003352+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.246471405s of 28.250848770s, submitted: 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:38.003450+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:39.003554+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:40.003654+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:41.003745+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917856 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:42.003843+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:43.003956+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:44.004057+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:45.004191+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:46.004298+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:47.004404+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:48.004534+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:49.004658+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:50.004795+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:51.004909+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:52.005058+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:53.005215+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:54.005384+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:55.005540+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:56.005668+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:57.005770+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:58.005944+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:59.006071+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:00.006223+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:01.006332+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:02.006849+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:03.006985+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:04.007107+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:05.007215+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:06.007324+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:07.007458+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:08.007602+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:09.007734+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:10.007868+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:11.007968+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:12.008102+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2820400 session 0x5584d4902780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:13.008216+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:14.008320+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:15.008448+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:16.008576+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:17.008710+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:18.008842+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:19.008988+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:20.009117+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:21.009231+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:22.009356+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:23.009465+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:24.009591+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:25.009693+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:26.009812+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.348262787s of 49.350105286s, submitted: 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:27.009949+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:28.010072+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:29.010195+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:30.010319+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:31.010455+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920289 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:32.010590+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:33.010778+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:34.010909+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:35.011114+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:36.011268+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:37.011457+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:38.011591+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:39.011715+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:40.011858+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:41.012016+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:42.012160+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:43.012351+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:44.012444+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:45.012596+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:46.012689+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:47.013158+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:48.013319+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:49.013481+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:50.013615+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:51.013768+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:52.013899+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:53.014043+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:54.014147+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:55.014272+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:56.014400+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:57.014523+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:58.014637+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:59.014754+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:00.014904+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:01.015028+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:02.015189+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:03.015373+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:04.015486+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:05.015616+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:06.015749+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:07.015885+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:08.016040+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:09.016176+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:10.016308+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:11.016487+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:12.016615+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:13.016779+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:14.016907+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:15.017056+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: mgrc ms_handle_reset ms_handle_reset con 0x5584d133d400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/92811439
Nov 25 10:03:01 compute-1 ceph-osd[77354]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/92811439,v1:192.168.122.100:6801/92811439]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: get_auth_request con 0x5584d1b1fc00 auth_method 0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: mgrc handle_mgr_configure stats_period=5
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:16.017169+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1e0d000 session 0x5584d1e73a40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1753400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:17.017301+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:18.017439+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:19.017597+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:20.017733+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:21.017866+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:22.018010+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1f17000 session 0x5584d1e73680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:23.018164+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:24.018270+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:25.018431+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:26.018587+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:27.018718+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:28.018884+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:29.019006+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:30.019141+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:31.019273+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:32.019403+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:33.019581+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:34.019722+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:35.019832+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:36.019959+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:37.020110+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:38.020235+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:39.020385+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2820800 session 0x5584d49023c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:40.020512+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:41.020625+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:42.020742+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:43.020870+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:44.021033+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:45.021176+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 78.668830872s of 78.672355652s, submitted: 3
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:46.021323+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:47.021450+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:48.021606+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:49.021720+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:50.021823+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:51.022001+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:52.022156+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:53.022336+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:54.022473+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:55.022641+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:56.022805+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:57.022944+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:58.023100+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:59.023271+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:00.023454+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:01.023583+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:02.023724+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:03.023885+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:04.024045+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:05.024181+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:06.024314+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:07.024455+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:08.024787+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:09.024951+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:10.025080+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:11.025237+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:12.025403+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:13.025578+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:14.025737+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:15.025893+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:16.026056+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 679936 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:17.026205+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:18.026351+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:19.026488+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:20.026635+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:21.026783+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:22.026894+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:23.027060+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:24.027188+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:25.027342+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:26.027503+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:27.027660+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:28.027781+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:29.027894+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:30.028051+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:31.028200+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:32.028325+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:33.028485+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:34.028623+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:35.028762+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.528675079s of 50.530040741s, submitted: 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:36.028891+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:37.029023+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:38.029159+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:39.029305+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:40.029446+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:41.029578+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 663552 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:42.029746+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 663552 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:43.029893+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:44.030059+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:45.030221+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:46.030315+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:47.030468+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:48.030622+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:49.031336+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:50.031437+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:51.031583+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:52.031705+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:53.031876+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:54.032040+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:55.032193+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:56.032348+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:57.032508+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:58.032666+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:59.032821+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:00.032955+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:01.033115+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:02.033269+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:03.033444+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:04.033583+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:05.033710+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:06.033873+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:07.034033+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:08.034196+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:09.034304+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:10.034457+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:11.034558+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:12.034717+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:13.034967+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:14.035147+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:15.035340+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:16.035529+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:17.035704+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:18.035930+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:19.036124+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:20.036272+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.554691315s of 44.555805206s, submitted: 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1d57400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:21.036387+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 17145856 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034365 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 138 ms_handle_reset con 0x5584d1d57400 session 0x5584d4f2e960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:22.036512+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 17014784 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _renew_subs
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 140 ms_handle_reset con 0x5584d1e0d800 session 0x5584d4f2f0e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fad9a000/0x0/0x4ffc00000, data 0x1ddd005/0x1e80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:23.036682+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 16982016 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:24.036836+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:25.037016+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:26.037202+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131657 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:27.037382+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:28.037531+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:29.037656+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:30.037817+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1e0d000 session 0x5584d1f23680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:31.037955+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131657 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:32.038092+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:33.038273+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.414980888s of 13.458241463s, submitted: 60
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:34.038388+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:35.038537+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:36.038690+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 16916480 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133169 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:37.038842+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 16908288 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:38.038968+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 16908288 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:39.039131+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 16908288 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:40.039284+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:41.039469+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131213 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:42.039625+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:43.039759+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:44.039892+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1f17000 session 0x5584d46ae780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:45.039997+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:46.040140+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131213 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:47.040244+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.465446472s of 13.467451096s, submitted: 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:48.040395+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:49.040550+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:50.040689+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:51.040828+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132725 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:52.040949+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:53.041103+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:54.041260+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:55.041380+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:56.041523+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132725 data_alloc: 218103808 data_used: 212992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:57.041677+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d2820800 session 0x5584d4ed5e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f16c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1f16c00 session 0x5584d4ed4000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4f5a1e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 16875520 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:58.041803+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.333181381s of 11.334502220s, submitted: 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 8929280 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 7032 writes, 27K keys, 7032 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7032 writes, 1507 syncs, 4.67 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 646 writes, 1251 keys, 646 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 646 writes, 310 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1e0d800 session 0x5584d4f5a5a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:59.041903+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1f17000 session 0x5584d429e780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2334720 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:00.042047+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2334720 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:01.042193+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _renew_subs
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d2820800 session 0x5584d429e5a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d2145800 session 0x5584d2c905a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1e0d000 session 0x5584d129e5a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1e0d800 session 0x5584d1f232c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1f17000 session 0x5584d27174a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 4931584 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214838 data_alloc: 234881024 data_used: 13844480
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:02.042329+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac5a000/0x0/0x4ffc00000, data 0x1f17341/0x1fc0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d2820800 session 0x5584d2716b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 4931584 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:03.042485+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 4931584 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:04.042582+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d2145400 session 0x5584d4703e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1e0d000 session 0x5584d47021e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1e0d800 session 0x5584d42a03c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95617024 unmapped: 5029888 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:05.042699+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 4988928 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa95b000/0x0/0x4ffc00000, data 0x2217364/0x22c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:06.042802+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa95b000/0x0/0x4ffc00000, data 0x2217364/0x22c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98615296 unmapped: 2031616 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235935 data_alloc: 234881024 data_used: 17027072
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:07.042931+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:08.043074+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:09.043205+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:10.043326+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.486788750s of 11.513138771s, submitted: 33
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:11.043466+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa957000/0x0/0x4ffc00000, data 0x2219336/0x22c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239062 data_alloc: 234881024 data_used: 17031168
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:12.043613+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa957000/0x0/0x4ffc00000, data 0x2219336/0x22c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:13.043770+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:14.043892+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:15.044001+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa957000/0x0/0x4ffc00000, data 0x2219336/0x22c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 101474304 unmapped: 1269760 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:16.044143+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266752 data_alloc: 234881024 data_used: 17367040
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:17.044280+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:18.044439+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:19.044586+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:20.044888+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:21.045008+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266752 data_alloc: 234881024 data_used: 17367040
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:22.045147+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.168714523s of 12.192200661s, submitted: 43
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:23.045276+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:24.045403+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:25.045528+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:26.045688+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266768 data_alloc: 234881024 data_used: 17367040
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:27.045833+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:28.045995+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:29.046114+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:30.046257+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:31.046363+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266768 data_alloc: 234881024 data_used: 17367040
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:32.046498+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:33.046646+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:34.046781+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145000 session 0x5584d4f5b0e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144c00 session 0x5584d4f5af00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144800 session 0x5584d2717c20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 311296 heap: 103792640 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4b60b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:35.046888+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d800 session 0x5584d471de00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.144062996s of 13.145666122s, submitted: 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144c00 session 0x5584d4ed4000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145000 session 0x5584d4ed1680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144400 session 0x5584d4f5b860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4b614a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d800 session 0x5584d4903860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 10338304 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:36.046992+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 10338304 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329516 data_alloc: 234881024 data_used: 18419712
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:37.047084+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f79000/0x0/0x4ffc00000, data 0x2bf8336/0x2ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 10338304 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:38.047214+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144c00 session 0x5584d46b1e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f79000/0x0/0x4ffc00000, data 0x2bf8336/0x2ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 10354688 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:39.047336+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145000 session 0x5584d47023c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 10354688 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:40.047594+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d48b5800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d48b5800 session 0x5584d4ed01e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4ed1e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 10338304 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:41.047692+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4b9a800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 10280960 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336051 data_alloc: 234881024 data_used: 18481152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:42.047793+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f54000/0x0/0x4ffc00000, data 0x2c1c346/0x2cc8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [1])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f54000/0x0/0x4ffc00000, data 0x2c1c346/0x2cc8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:43.047913+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f54000/0x0/0x4ffc00000, data 0x2c1c346/0x2cc8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:44.048021+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:45.048164+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:46.048319+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391379 data_alloc: 234881024 data_used: 25964544
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:47.048439+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.915447235s of 11.942303658s, submitted: 23
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f54000/0x0/0x4ffc00000, data 0x2c1c346/0x2cc8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:48.048534+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:49.048644+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f52000/0x0/0x4ffc00000, data 0x2c1d346/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 2932736 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:50.048772+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 2932736 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:51.048907+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 1998848 heap: 120717312 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1474675 data_alloc: 234881024 data_used: 26361856
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8470000/0x0/0x4ffc00000, data 0x3560346/0x360c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:52.049002+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 1032192 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:53.049103+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82d7000/0x0/0x4ffc00000, data 0x36f8346/0x37a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 999424 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:54.049227+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82d7000/0x0/0x4ffc00000, data 0x36f8346/0x37a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 991232 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:55.049382+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 991232 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:56.049531+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 983040 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491945 data_alloc: 234881024 data_used: 26857472
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:57.049685+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.966114044s of 10.044518471s, submitted: 112
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:58.049814+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:59.049940+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82b7000/0x0/0x4ffc00000, data 0x3719346/0x37c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:00.050038+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82b7000/0x0/0x4ffc00000, data 0x3719346/0x37c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:01.050207+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1482465 data_alloc: 234881024 data_used: 26857472
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:02.050368+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4b9a800 session 0x5584d4f2f860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f0b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bab000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bab000 session 0x5584d42a1e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:03.050563+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:04.050701+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:05.050863+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9612000/0x0/0x4ffc00000, data 0x23bf336/0x246a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:06.051061+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276728 data_alloc: 234881024 data_used: 17502208
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:07.051178+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1f17000 session 0x5584d4f2ed20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820800 session 0x5584d4f2ef00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.505298615s of 10.529612541s, submitted: 37
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 9953280 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:08.051309+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d4f2f860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 14532608 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:09.051399+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:10.051535+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:11.051708+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205070 data_alloc: 234881024 data_used: 13979648
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:12.051878+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:13.052001+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:14.052136+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 14352384 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:15.052283+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 14352384 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:16.052443+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 14352384 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205070 data_alloc: 234881024 data_used: 13979648
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:17.052544+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 14352384 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:18.052685+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:19.052825+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:20.052920+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d45c4780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:21.053047+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205070 data_alloc: 234881024 data_used: 13979648
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:22.053195+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:23.053334+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:24.053453+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:25.053590+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:26.053733+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 14336000 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205070 data_alloc: 234881024 data_used: 13979648
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:27.053840+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 14336000 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:28.053969+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d47f2000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4b9a800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.747106552s of 20.908111572s, submitted: 283
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4b9a800 session 0x5584d44301e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d27161e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4953e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820800 session 0x5584d4953860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 12705792 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d471de00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:29.054072+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 12705792 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:30.054213+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9668000/0x0/0x4ffc00000, data 0x1f5a313/0x2004000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 12705792 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:31.054312+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bab000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bab000 session 0x5584d47f0b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9668000/0x0/0x4ffc00000, data 0x1f5a313/0x2004000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 12705792 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224133 data_alloc: 234881024 data_used: 14897152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:32.054484+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110116864 unmapped: 12697600 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:33.054661+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d4f2e960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9668000/0x0/0x4ffc00000, data 0x1f5a313/0x2004000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110116864 unmapped: 12697600 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:34.054813+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d45d30e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d45d25a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 12673024 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:35.054932+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 13115392 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:36.055054+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 13115392 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238946 data_alloc: 234881024 data_used: 16240640
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:37.055177+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9667000/0x0/0x4ffc00000, data 0x1f5a322/0x2005000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 13115392 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:38.055325+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 13115392 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:39.055451+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:40.055571+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.456128120s of 11.477423668s, submitted: 26
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:41.055666+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:42.055783+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238355 data_alloc: 234881024 data_used: 16240640
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:43.055919+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9667000/0x0/0x4ffc00000, data 0x1f5a322/0x2005000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:44.056044+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:45.056146+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 6742016 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:46.056290+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:47.056399+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347403 data_alloc: 234881024 data_used: 17268736
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d8000/0x0/0x4ffc00000, data 0x2ce9322/0x2d94000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:48.056529+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:49.056636+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:50.056765+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:51.056876+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 8577024 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d8000/0x0/0x4ffc00000, data 0x2ce9322/0x2d94000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.482524872s of 11.550142288s, submitted: 118
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:52.057035+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345067 data_alloc: 234881024 data_used: 17268736
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:53.057197+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:54.057344+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:55.057476+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d5000/0x0/0x4ffc00000, data 0x2cec322/0x2d97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:56.057611+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d4d845a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:57.057766+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345067 data_alloc: 234881024 data_used: 17268736
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:58.057893+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:59.058001+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:00.058179+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d5000/0x0/0x4ffc00000, data 0x2cec322/0x2d97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:01.058352+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:02.058512+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345067 data_alloc: 234881024 data_used: 17268736
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:03.058694+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:04.058818+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d5000/0x0/0x4ffc00000, data 0x2cec322/0x2d97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:05.058968+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 8847360 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:06.059145+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 8847360 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d5000/0x0/0x4ffc00000, data 0x2cec322/0x2d97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:07.059233+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 8847360 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345371 data_alloc: 234881024 data_used: 17276928
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.049795151s of 16.052217484s, submitted: 3
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820800 session 0x5584d39f81e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d429ef00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:08.059407+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 8847360 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d39ade00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:09.059632+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:10.059802+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:11.059971+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:12.060213+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218449 data_alloc: 234881024 data_used: 14897152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:13.060392+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:14.060506+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:15.060646+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:16.060754+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:17.060891+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218449 data_alloc: 234881024 data_used: 14897152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:18.061055+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:19.061302+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.493480682s of 11.520867348s, submitted: 50
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:20.061465+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:21.061625+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:22.061756+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217858 data_alloc: 234881024 data_used: 14897152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:23.061901+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:24.062000+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:25.062173+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:26.062331+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:27.062447+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d47f50e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4d85680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17c00 session 0x5584d44512c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217858 data_alloc: 234881024 data_used: 14897152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d45d2f00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d47025a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d45c45a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d47f2960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17800 session 0x5584d42a25a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d287cd20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:28.062611+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 11575296 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:29.062720+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 11575296 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f959d000/0x0/0x4ffc00000, data 0x2024375/0x20cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:30.062887+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 11542528 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:31.063050+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 11542528 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:32.063173+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 11542528 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248433 data_alloc: 234881024 data_used: 14897152
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f959d000/0x0/0x4ffc00000, data 0x2024375/0x20cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:33.063927+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 11542528 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.898181915s of 13.919568062s, submitted: 34
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d3c56960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:34.064035+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 11198464 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:35.064157+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 11321344 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:36.064282+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:37.064477+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260815 data_alloc: 234881024 data_used: 15953920
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9578000/0x0/0x4ffc00000, data 0x2048398/0x20f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:38.064573+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:39.064675+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:40.064786+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:41.064889+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9578000/0x0/0x4ffc00000, data 0x2048398/0x20f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:42.065012+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260815 data_alloc: 234881024 data_used: 15953920
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:43.065166+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019714355s of 10.027096748s, submitted: 11
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:44.065283+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 11157504 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91fc000/0x0/0x4ffc00000, data 0x23b8398/0x2464000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:45.065390+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:46.065482+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:47.065585+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300051 data_alloc: 234881024 data_used: 16035840
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:48.065691+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:49.065789+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:50.065906+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:51.066041+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:52.066150+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300067 data_alloc: 234881024 data_used: 16035840
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4953680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:53.066301+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:54.066441+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:55.066582+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:56.066729+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:57.066897+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300067 data_alloc: 234881024 data_used: 16035840
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:58.067083+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 10485760 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:59.067194+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 10485760 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:00.067296+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 10485760 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:01.067387+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.798303604s of 17.844646454s, submitted: 87
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115572736 unmapped: 9347072 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17400 session 0x5584d49523c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc000 session 0x5584d47f30e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3a20000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d47f01e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d49523c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:02.067511+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 11493376 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330200 data_alloc: 234881024 data_used: 16035840
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f99000/0x0/0x4ffc00000, data 0x2627398/0x26d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:03.067700+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 11485184 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:04.067882+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 11485184 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:05.068042+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 11485184 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f99000/0x0/0x4ffc00000, data 0x2627398/0x26d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:06.068188+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 11419648 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:07.068297+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17400 session 0x5584d47f3680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 11436032 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330200 data_alloc: 234881024 data_used: 16035840
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfcc00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfcc00 session 0x5584d47f23c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:08.068404+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 11436032 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f99000/0x0/0x4ffc00000, data 0x2627398/0x26d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f2960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d47f32c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:09.068554+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 11878400 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f98000/0x0/0x4ffc00000, data 0x26273bb/0x26d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:10.068664+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 11968512 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:11.068812+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:12.068929+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341382 data_alloc: 234881024 data_used: 16977920
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f98000/0x0/0x4ffc00000, data 0x26273bb/0x26d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:13.069106+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:14.069255+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f98000/0x0/0x4ffc00000, data 0x26273bb/0x26d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:15.069370+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:16.069506+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d49025a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:17.069685+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341382 data_alloc: 234881024 data_used: 16977920
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f98000/0x0/0x4ffc00000, data 0x26273bb/0x26d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:18.069818+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.518260956s of 17.544563293s, submitted: 39
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:19.069929+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 9822208 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:20.070042+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 11419648 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:21.070157+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:22.070265+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417958 data_alloc: 234881024 data_used: 17158144
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x30803bb/0x312d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:23.070389+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:24.070518+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x30803bb/0x312d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:25.070619+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:26.070715+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 11059200 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:27.070788+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 11059200 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417318 data_alloc: 234881024 data_used: 17158144
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d3eeb2c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17400 session 0x5584d3a21680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:28.070885+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 11157504 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d46af680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:29.070992+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 12926976 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:30.071156+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 12926976 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.546496391s of 11.653998375s, submitted: 179
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:31.071272+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 11837440 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4450000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d287d2c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d287cd20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:32.071375+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 12533760 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246060 data_alloc: 234881024 data_used: 14766080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:33.071633+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 12533760 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:34.071794+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f950f000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:35.071917+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:36.072071+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:37.072192+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247572 data_alloc: 234881024 data_used: 14766080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:38.072335+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:39.072438+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:40.072563+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:41.072673+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:42.072804+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244413 data_alloc: 234881024 data_used: 14766080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:43.072938+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:44.073075+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:45.073183+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:46.073284+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:47.073402+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244413 data_alloc: 234881024 data_used: 14766080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:48.073550+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:49.073642+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:50.073770+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:51.073888+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:52.074003+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244413 data_alloc: 234881024 data_used: 14766080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:53.074180+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:54.074319+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:55.074443+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfdc00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfdc00 session 0x5584d46aef00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d4431860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d4ed1860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4451e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.848478317s of 24.885297775s, submitted: 68
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d3c57680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 19963904 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfbc00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfbc00 session 0x5584d29e6f00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfbc00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfbc00 session 0x5584d28ed2c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d42a1c20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d4ed5680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:56.074586+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 19963904 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:57.074689+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 19955712 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315631 data_alloc: 234881024 data_used: 14766080
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f51000/0x0/0x4ffc00000, data 0x2670323/0x271b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d1e72780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:58.075167+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d457f2c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 19955712 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d2bd45a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d4ed0f00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:59.075285+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfbc00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 19652608 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:00.075388+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 16015360 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:01.075519+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 16015360 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f2d000/0x0/0x4ffc00000, data 0x2694323/0x273f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:02.075675+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370251 data_alloc: 234881024 data_used: 19406848
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2891068716' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:03.075824+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:04.075952+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:05.076111+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:06.076286+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:07.076468+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370251 data_alloc: 234881024 data_used: 19406848
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f2d000/0x0/0x4ffc00000, data 0x2694323/0x273f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:08.076654+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119455744 unmapped: 15966208 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f2d000/0x0/0x4ffc00000, data 0x2694323/0x273f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.643066406s of 13.664609909s, submitted: 19
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:09.076776+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 9805824 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2ca8323/0x2d53000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:10.076945+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:11.077096+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:12.077331+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430737 data_alloc: 234881024 data_used: 19668992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:13.077480+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:14.077620+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:15.077733+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 9428992 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:16.077868+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 9428992 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:17.078041+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 9428992 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430753 data_alloc: 234881024 data_used: 19668992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:18.078240+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 9428992 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:19.078454+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 9412608 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:20.078919+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126017536 unmapped: 9404416 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:21.079021+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 9396224 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:22.079168+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 9396224 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430753 data_alloc: 234881024 data_used: 19668992
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:23.079680+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:24.079835+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:25.079990+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:26.080124+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:27.080281+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430905 data_alloc: 234881024 data_used: 19673088
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:28.080442+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 9355264 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:29.080545+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfac00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfac00 session 0x5584d3eec3c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd400 session 0x5584d49534a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd800 session 0x5584d4796b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 9355264 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d46b03c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.557857513s of 20.602546692s, submitted: 80
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd400 session 0x5584d3a21a40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc800 session 0x5584d47f25a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d4431860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d59c4400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d59c4400 session 0x5584d2bd5860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d39ac5a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:30.080672+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867b000/0x0/0x4ffc00000, data 0x2f45333/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:31.080770+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:32.080901+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1444859 data_alloc: 234881024 data_used: 19673088
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867b000/0x0/0x4ffc00000, data 0x2f45333/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867b000/0x0/0x4ffc00000, data 0x2f45333/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:33.081042+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4ed5a40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:34.081182+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc800 session 0x5584d4f2f0e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd400 session 0x5584d3a23e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d59c4c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d59c4c00 session 0x5584d44310e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:35.081287+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125009920 unmapped: 10412032 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:36.081382+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867a000/0x0/0x4ffc00000, data 0x2f45343/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:37.081442+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464457 data_alloc: 234881024 data_used: 22360064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:38.081551+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:39.081656+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:40.081789+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.953464508s of 10.961739540s, submitted: 6
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:41.081889+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867a000/0x0/0x4ffc00000, data 0x2f45343/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:42.082027+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 8691712 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464121 data_alloc: 234881024 data_used: 22360064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:43.082232+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 8691712 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:44.082356+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867a000/0x0/0x4ffc00000, data 0x2f45343/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:45.082456+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127205376 unmapped: 8216576 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f83b5000/0x0/0x4ffc00000, data 0x320a343/0x32b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:46.082548+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 8183808 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:47.082674+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 8183808 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525997 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:48.082803+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 8175616 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:49.082931+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 8175616 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:50.083018+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 8175616 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:51.083115+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 8167424 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:52.083211+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.735983849s of 11.772041321s, submitted: 44
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 8716288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:53.083328+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 8716288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:54.083436+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 8675328 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:55.083585+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 8675328 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:56.083690+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 8675328 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:57.083817+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 8675328 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:58.083948+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 8667136 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:59.084091+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 8667136 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:00.084254+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 8667136 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:01.084365+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 8667136 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:02.084491+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:03.084645+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:04.084806+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:05.084942+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:06.085076+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:07.085210+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:08.085345+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:09.085451+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:10.085576+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:11.085730+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:12.085906+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:13.086035+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:14.086208+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:15.086331+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:16.086509+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.080341339s of 24.082492828s, submitted: 2
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:17.086617+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:18.086752+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:19.086856+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:20.086990+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:21.087080+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:22.087239+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:23.087378+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:24.087522+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 8634368 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:25.087693+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 8634368 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:26.087806+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:27.087940+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:28.088066+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:29.088284+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:30.088474+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:31.088610+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:32.088791+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:33.088952+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:34.089109+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:35.089258+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:36.089389+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:37.089544+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:38.089666+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 8609792 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:39.089799+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 8609792 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:40.090182+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:41.090286+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:42.090366+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:43.090563+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:44.090722+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:45.090855+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:46.090968+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d2716960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4ed5e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.627803802s of 30.629236221s, submitted: 1
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:47.091090+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc800 session 0x5584d47f3c20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429886 data_alloc: 234881024 data_used: 19673088
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:48.091201+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f890f000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:49.091315+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:50.091482+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:51.091604+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:52.091750+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f890f000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d471c1e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfbc00 session 0x5584d41ee3c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429886 data_alloc: 234881024 data_used: 19673088
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f890f000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d44314a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:53.091895+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:54.092020+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:55.092180+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:56.092332+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:57.092491+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262508 data_alloc: 234881024 data_used: 11358208
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:58.092645+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:59.092799+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:00.092932+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:01.093070+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:02.093225+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262508 data_alloc: 234881024 data_used: 11358208
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d3c57860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:03.093352+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:04.093492+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:05.093636+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:06.093775+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:07.093911+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262508 data_alloc: 234881024 data_used: 11358208
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:08.094053+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:09.094264+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:10.094407+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:11.094534+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:12.094663+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d47f45a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d47f4b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfbc00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfbc00 session 0x5584d47f4d20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f4960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.468971252s of 25.488265991s, submitted: 34
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3eea780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d41eed20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d41eef00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc800 session 0x5584d3c56d20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3eeba40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373069 data_alloc: 234881024 data_used: 11358208
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:13.094774+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f89a2000/0x0/0x4ffc00000, data 0x2c1e385/0x2cca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:14.094928+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:15.095077+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f89a2000/0x0/0x4ffc00000, data 0x2c1e385/0x2cca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:16.095244+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:17.095396+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373069 data_alloc: 234881024 data_used: 11358208
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:18.095576+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:19.095709+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f89a2000/0x0/0x4ffc00000, data 0x2c1e385/0x2cca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3a205a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:20.095842+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 28196864 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:21.095953+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 19505152 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:22.096116+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 19505152 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477502 data_alloc: 234881024 data_used: 25718784
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:23.096273+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f897d000/0x0/0x4ffc00000, data 0x2c423a8/0x2cef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 19505152 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:24.096879+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 19505152 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d3c56780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d3c55860
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:25.097041+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.703448296s of 12.744194031s, submitted: 43
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd400 session 0x5584d39ac5a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:26.097149+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:27.097268+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273728 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:28.097397+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:29.097547+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:30.097710+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:31.097817+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:32.097937+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273728 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d42a2000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d46b05a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d3a1c960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d4430d20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:33.098053+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d59c4800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d59c4800 session 0x5584d471de00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d29e70e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d46afa40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4f2f4a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d4953a40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d3eede00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:34.098217+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:35.098334+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:36.098522+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:37.098629+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363781 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:38.098766+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d46af680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:39.098935+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:40.099063+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 28205056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:41.099147+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 23552000 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:42.099306+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 23552000 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454373 data_alloc: 234881024 data_used: 23724032
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:43.099496+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d46aef00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d44314a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 23519232 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.636554718s of 18.685222626s, submitted: 63
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d3a21c20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:44.099585+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:45.099751+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:46.099880+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:47.100012+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278553 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:48.100164+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:49.100331+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c16000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:50.100471+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:51.100595+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:52.100801+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280065 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:53.100998+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:54.101172+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:55.101340+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:56.101445+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:57.101551+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280065 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:58.101684+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:59.101820+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:00.101953+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:01.102085+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:02.102220+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280065 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:03.102367+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17400 session 0x5584d47f34a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f2b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d47f2780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d47f25a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.483383179s of 19.503284454s, submitted: 22
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d4f2fa40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160c00 session 0x5584d287c000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d457f2c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d457e3c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4ed4000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:04.102504+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:05.102639+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:06.102740+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:07.102865+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303707 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:08.103000+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d4ed52c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6161c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:09.103104+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:10.103264+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:11.103370+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:12.103514+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316323 data_alloc: 234881024 data_used: 12931072
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:13.103671+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:14.103810+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:15.103977+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:16.104145+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:17.104273+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316323 data_alloc: 234881024 data_used: 12931072
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:18.104373+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.134925842s of 15.145795822s, submitted: 11
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 24338432 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:19.104527+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8ddd000/0x0/0x4ffc00000, data 0x27e5313/0x288f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 24338432 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:20.104613+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:21.104729+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8dc3000/0x0/0x4ffc00000, data 0x27f7313/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:22.104955+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383479 data_alloc: 234881024 data_used: 14221312
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:23.105116+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:24.105253+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:25.105443+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8dc3000/0x0/0x4ffc00000, data 0x27f7313/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:26.105601+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8dc3000/0x0/0x4ffc00000, data 0x27f7313/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:27.105752+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383631 data_alloc: 234881024 data_used: 14225408
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:28.105882+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:29.106013+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:30.106152+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8dc3000/0x0/0x4ffc00000, data 0x27f7313/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160000 session 0x5584d3c57a40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6161c00 session 0x5584d3a22d20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:31.106260+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.536352158s of 12.585522652s, submitted: 84
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f2960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:32.106383+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284656 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:33.106551+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:34.106676+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:35.106811+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:36.106941+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:37.107045+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284656 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:38.107179+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:39.107317+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:40.107450+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:41.107557+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:42.107693+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284656 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:43.107894+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:44.108059+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:45.108198+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread fragmentation_score=0.000153 took=0.000046s
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:46.108335+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:47.108444+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284656 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:48.108558+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:49.108695+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:50.108826+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:51.108919+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:52.109038+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.299646378s of 21.308881760s, submitted: 13
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3c57e00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d3a223c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d46aef00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d46af680
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4f2e960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 25772032 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368141 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:53.109191+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 25772032 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:54.109335+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 25772032 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:55.109475+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a7e000/0x0/0x4ffc00000, data 0x2733375/0x27de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 25772032 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:56.109591+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 25763840 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:57.109688+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 25763840 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368141 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:58.109801+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a7e000/0x0/0x4ffc00000, data 0x2733375/0x27de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 25763840 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:59.109904+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d3c58780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6161c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 25812992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:00.110006+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:01.110104+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a7d000/0x0/0x4ffc00000, data 0x2733398/0x27df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:02.110201+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426247 data_alloc: 234881024 data_used: 19419136
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:03.110352+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:04.110453+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:05.110551+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a7d000/0x0/0x4ffc00000, data 0x2733398/0x27df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:06.110652+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:07.110758+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426247 data_alloc: 234881024 data_used: 19419136
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:08.110858+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.777359962s of 15.817565918s, submitted: 53
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160800 session 0x5584d4f2fc20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d43b6c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d43b6c00 session 0x5584d47f43c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d39f83c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4ed12c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4ed0780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f80f7000/0x0/0x4ffc00000, data 0x30b9398/0x3165000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 25526272 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:09.110962+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 19357696 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:10.111060+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160800 session 0x5584d4ed0f00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 17154048 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:11.111169+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d43c1400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d43c1400 session 0x5584d4ed0000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3eede00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3eecb40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131997696 unmapped: 17145856 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:12.111359+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 14237696 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1638084 data_alloc: 234881024 data_used: 23384064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:13.111608+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7368000/0x0/0x4ffc00000, data 0x3e29398/0x3ed5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 8151040 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:14.111774+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 8151040 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:15.111873+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 8151040 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:16.111975+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 8617984 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:17.112079+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:18.112224+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 8617984 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1670212 data_alloc: 251658240 data_used: 29474816
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:19.112355+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 8609792 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7368000/0x0/0x4ffc00000, data 0x3e48398/0x3ef4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:20.112459+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 8609792 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7368000/0x0/0x4ffc00000, data 0x3e48398/0x3ef4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:21.112556+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 8609792 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.249376297s of 13.353449821s, submitted: 191
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:22.112657+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 145309696 unmapped: 3833856 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:23.112826+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 6389760 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1764866 data_alloc: 251658240 data_used: 30367744
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f6819000/0x0/0x4ffc00000, data 0x4997398/0x4a43000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:24.113012+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142958592 unmapped: 6184960 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:25.113253+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142958592 unmapped: 6184960 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:26.113489+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142958592 unmapped: 6184960 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f6801000/0x0/0x4ffc00000, data 0x49af398/0x4a5b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:27.113714+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142966784 unmapped: 6176768 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:28.113843+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142966784 unmapped: 6176768 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765330 data_alloc: 251658240 data_used: 30367744
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:29.113969+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142966784 unmapped: 6176768 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:30.114076+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f67fe000/0x0/0x4ffc00000, data 0x49b2398/0x4a5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:31.114184+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:32.114372+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f67fe000/0x0/0x4ffc00000, data 0x49b2398/0x4a5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:33.115043+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1763018 data_alloc: 251658240 data_used: 30367744
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:34.115159+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f67fe000/0x0/0x4ffc00000, data 0x49b2398/0x4a5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:35.115267+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:36.115503+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:37.115654+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142983168 unmapped: 6160384 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:38.115816+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142983168 unmapped: 6160384 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1763018 data_alloc: 251658240 data_used: 30367744
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d43f2d20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.842294693s of 16.914205551s, submitted: 108
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160800 session 0x5584d4952b40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4b9d000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4b9d000 session 0x5584d46afc20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:39.115959+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 14082048 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7cdb000/0x0/0x4ffc00000, data 0x34d5398/0x3581000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:40.116107+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7cdb000/0x0/0x4ffc00000, data 0x34d5398/0x3581000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 14082048 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:41.116248+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 14082048 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6161c00 session 0x5584d3a1c000
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160400 session 0x5584d45d32c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:42.117593+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135102464 unmapped: 14041088 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d471da40
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:43.117726+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315640 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:44.117854+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:45.117957+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:46.118120+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:47.118665+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:48.118759+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315640 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:49.118914+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:50.119081+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:51.119217+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:52.119392+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:53.119542+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315640 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:54.119708+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:55.119846+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:56.119986+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:57.120081+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:58.120223+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315640 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:59.120401+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:00.120562+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3a1cd20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d49521e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3eec1e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d45d30e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.817098618s of 21.874700546s, submitted: 96
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160400 session 0x5584d3eec3c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6161c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6161c00 session 0x5584d4f2e780
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160800
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160800 session 0x5584d39acf00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3a20d20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d472e960
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:01.120667+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4c000/0x0/0x4ffc00000, data 0x2465323/0x2510000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:02.120816+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:03.120987+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367100 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:04.121140+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:05.121292+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160400 session 0x5584d46ae5a0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:06.121460+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6161c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4b98c00
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4b000/0x0/0x4ffc00000, data 0x2465346/0x2511000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:07.121603+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 28188672 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:08.121728+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415766 data_alloc: 234881024 data_used: 17817600
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:09.121880+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:10.122043+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:11.122197+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:12.122304+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4b000/0x0/0x4ffc00000, data 0x2465346/0x2511000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:13.122449+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415766 data_alloc: 234881024 data_used: 17817600
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:14.122607+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:15.122717+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4b000/0x0/0x4ffc00000, data 0x2465346/0x2511000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:16.122818+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.094482422s of 16.114328384s, submitted: 19
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:17.122926+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136011776 unmapped: 22650880 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4b000/0x0/0x4ffc00000, data 0x2465346/0x2511000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:18.123037+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506740 data_alloc: 234881024 data_used: 18153472
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:19.123155+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82da000/0x0/0x4ffc00000, data 0x2eb1346/0x2f5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:20.123305+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:21.123437+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:22.123575+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82da000/0x0/0x4ffc00000, data 0x2eb1346/0x2f5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:23.123722+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 22929408 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497156 data_alloc: 234881024 data_used: 18153472
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82fd000/0x0/0x4ffc00000, data 0x2eb3346/0x2f5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:24.123860+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 22929408 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:25.123993+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 22921216 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:26.124094+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 22921216 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:27.124272+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 22921216 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82fd000/0x0/0x4ffc00000, data 0x2eb3346/0x2f5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:28.124468+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.581636429s of 11.648424149s, submitted: 112
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 22921216 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497380 data_alloc: 234881024 data_used: 18153472
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6161c00 session 0x5584d47f03c0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4b98c00 session 0x5584d2c2d0e0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:29.124628+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3c56d20
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:30.124765+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:31.124933+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:32.125057+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:33.125187+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:34.125318+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:35.125496+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:36.125615+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:37.125745+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:38.125845+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:39.125995+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:40.126131+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:41.126302+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:42.126428+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:43.126579+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:44.126716+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:45.126853+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:46.126973+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:47.127104+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:48.127285+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:49.127432+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:50.127593+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 28270592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:51.127750+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 28270592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:52.127904+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 28270592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:53.128075+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 28270592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:54.128205+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 28262400 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:55.128367+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 28262400 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:56.128508+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 28262400 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:57.128678+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 28262400 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:58.128817+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 3198 syncs, 3.41 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3881 writes, 13K keys, 3881 commit groups, 1.0 writes per commit group, ingest: 16.69 MB, 0.03 MB/s
                                           Interval WAL: 3881 writes, 1691 syncs, 2.30 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:59.128949+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:00.129075+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:01.129191+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:02.129290+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:03.129441+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:04.129576+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:05.129712+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:06.129824+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:07.129960+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:08.130095+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:09.130217+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:10.130347+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:11.130488+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:12.130622+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:13.130770+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:14.130917+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:15.131051+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:16.131145+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:17.131236+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:18.131347+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:19.131459+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:20.131568+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:21.131669+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:22.131781+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:23.131903+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:24.132032+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:25.132133+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:26.132237+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130441216 unmapped: 28221440 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:27.132342+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130441216 unmapped: 28221440 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:28.132468+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:01 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130441216 unmapped: 28221440 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:29.132605+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'config diff' '{prefix=config diff}'
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'config show' '{prefix=config show}'
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 28467200 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:30.132715+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129744896 unmapped: 28917760 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:03:01 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:03:01 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:31.132817+0000)
Nov 25 10:03:01 compute-1 ceph-osd[77354]: do_command 'log dump' '{prefix=log dump}'
Nov 25 10:03:01 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:03:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:01.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:01 compute-1 podman[239656]: 2025-11-25 10:03:01.853608302 +0000 UTC m=+0.098951721 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 10:03:02 compute-1 nova_compute[228683]: 2025-11-25 10:03:02.126 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.26971 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.27011 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.17316 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3838890863' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.17337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1754728243' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.17340 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2891068716' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.27050 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2721465140' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.17370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3483676717' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2140606591' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: from='client.27074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 10:03:02 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3671190065' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 25 10:03:02 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3398466942' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:03:02 compute-1 crontab[239907]: (root) LIST (root)
Nov 25 10:03:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 10:03:02 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3722671052' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.17382 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/193731057' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.27083 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3671190065' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.27104 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/155015122' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.17406 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2366599518' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.27119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.27131 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3398466942' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/33019584' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.17427 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.27100 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3722671052' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:03 compute-1 sudo[239991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:03:03 compute-1 sudo[239991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:03 compute-1 sudo[239991]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:03.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:03 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 25 10:03:03 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2900811919' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 25 10:03:04 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4041057590' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.17457 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.27133 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.27179 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2732606932' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2380605087' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.17481 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3116723934' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.27151 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.27200 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2900811919' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.17505 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4290597205' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.17517 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.27227 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4041057590' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/26241685' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/774446042' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 25 10:03:04 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3845725757' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 25 10:03:04 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2991243437' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 25 10:03:04 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1688808163' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 25 10:03:04 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3862137652' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:03:04 compute-1 nova_compute[228683]: 2025-11-25 10:03:04.971 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:03:05.005 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:03:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:03:05.006 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:03:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:03:05.006 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:03:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 25 10:03:05 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3238745345' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.17529 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4109658705' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3845725757' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2991243437' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.17547 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3003311665' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/290580060' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1703170186' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1688808163' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3862137652' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3310589035' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/771778548' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3093908409' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2822567927' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3119001524' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3238745345' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2123548578' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3730769304' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:03:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 25 10:03:05 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1416867171' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 25 10:03:05 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/513452612' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 25 10:03:05 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947504297' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 10:03:05 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2940205370' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:03:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:05.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 25 10:03:05 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3097540885' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 25 10:03:05 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/608595569' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:03:06 compute-1 systemd[1]: Starting Hostname Service...
Nov 25 10:03:06 compute-1 systemd[1]: Started Hostname Service.
Nov 25 10:03:06 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 25 10:03:06 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782075080' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 25 10:03:06 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3746439262' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3807673395' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1416867171' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4267367237' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3265515761' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/513452612' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3947504297' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2940205370' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/20258841' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1025545117' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4260874306' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3097540885' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/608595569' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3348475619' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3744911263' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2422675866' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/782075080' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3746439262' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 25 10:03:06 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/253986332' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 25 10:03:06 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2637886262' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 10:03:06 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3775875560' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:03:07 compute-1 nova_compute[228683]: 2025-11-25 10:03:07.128 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:07 compute-1 ceph-mon[79643]: pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2516758813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.27413 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.27376 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/253986332' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4064436364' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2637886262' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3775875560' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.27443 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.27403 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.27409 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.27464 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1261157585' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2561018519' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/193353403' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 25 10:03:07 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/700548910' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 25 10:03:07 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/287032064' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:07.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 25 10:03:07 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2342319568' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:03:08 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1806013605' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.27433 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.27442 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.17748 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.27506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/700548910' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.17763 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.27463 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/287032064' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.17781 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.17784 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.27539 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2342319568' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.27490 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/522284379' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1806013605' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 25 10:03:08 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447917133' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 25 10:03:08 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2201478624' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:08 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 25 10:03:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3067395760' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.27502 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.27566 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.17832 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2607549409' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.27526 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.27587 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2447917133' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2201478624' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.17862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1669203688' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.27559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.27623 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.17898 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1515064773' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3067395760' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3605452620' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:09.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:09.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 25 10:03:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2749113096' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-1 nova_compute[228683]: 2025-11-25 10:03:09.972 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.17925 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/471202242' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.27634 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.27637 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1905069120' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.27649 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2749113096' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3490083567' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2934849903' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 25 10:03:10 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1728719757' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:03:10 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 25 10:03:10 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3837461833' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:03:10 compute-1 podman[241296]: 2025-11-25 10:03:10.928193222 +0000 UTC m=+0.072263843 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 10:03:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:11.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:11 compute-1 ceph-mon[79643]: pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4246662599' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:03:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3270856375' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:03:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1728719757' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:03:11 compute-1 ceph-mon[79643]: from='client.17985 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1648111896' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:03:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3837461833' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:03:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2850184845' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:03:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3299312650' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:03:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 25 10:03:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3037981209' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:03:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:11.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 25 10:03:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1561355588' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:03:12 compute-1 nova_compute[228683]: 2025-11-25 10:03:12.129 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.27776 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3057636097' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.27724 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3037981209' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2392476456' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2060492933' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1561355588' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1806941140' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/73558721' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:03:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 25 10:03:12 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2598971785' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:03:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:13.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:13 compute-1 ceph-mon[79643]: pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.18057 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.18063 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.27769 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2598971785' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1362293775' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/79477528' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.27854 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.27790 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4050823747' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:03:13 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 25 10:03:13 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/691021732' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:03:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:13.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:13 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 25 10:03:13 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2835888216' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:03:14 compute-1 ceph-mon[79643]: from='client.18090 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-1 ceph-mon[79643]: from='client.27805 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-1 ceph-mon[79643]: from='client.18099 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/691021732' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:03:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1257740203' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:03:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/74182607' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:03:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2835888216' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:03:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1956004436' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:03:14 compute-1 ovs-appctl[242510]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 10:03:14 compute-1 ovs-appctl[242515]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 10:03:14 compute-1 ovs-appctl[242522]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 10:03:14 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 25 10:03:14 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1716749149' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:14 compute-1 nova_compute[228683]: 2025-11-25 10:03:14.973 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:15.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.18126 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.27917 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.27850 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.18144 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.27935 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.27862 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1861715961' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1716749149' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2128233638' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4168176874' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1859111447' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 10:03:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1141962610' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 10:03:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.002000020s ======
Nov 25 10:03:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Nov 25 10:03:15 compute-1 podman[243059]: 2025-11-25 10:03:15.839101179 +0000 UTC m=+0.092036855 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:03:15 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 10:03:15 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3646571918' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.18192 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.18198 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.27901 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.27974 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.27980 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.27907 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2714505728' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3646571918' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2300953125' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1812479581' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Nov 25 10:03:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2628656410' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 10:03:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 25 10:03:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3544174585' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-1 nova_compute[228683]: 2025-11-25 10:03:17.131 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:17.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:03:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1306955108' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2628656410' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1702968464' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: from='client.18252 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3544174585' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2826370956' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: from='client.18267 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2157215697' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1306955108' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 25 10:03:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1651478465' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:17.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 25 10:03:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/917369986' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 25 10:03:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2584142324' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.28028 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.27961 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1562817346' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1753420583' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1651478465' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/537900919' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1210099267' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/917369986' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2147542039' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2584142324' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 25 10:03:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1010117050' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:19.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 25 10:03:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1563001551' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.28064 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3973466451' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1729700647' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.28082 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.28024 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1122338819' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1010117050' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3433541028' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3064872230' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1563001551' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:19.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 25 10:03:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/887379790' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-1 nova_compute[228683]: 2025-11-25 10:03:19.975 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/258153879' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-1 ceph-mon[79643]: from='client.28121 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1555726303' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-1 ceph-mon[79643]: from='client.28127 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/887379790' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3697817588' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1050656807' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 25 10:03:20 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2139458383' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 25 10:03:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2197708573' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:21.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.18381 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.28145 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.28063 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.28151 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.28069 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/820153740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2139458383' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3958941425' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2197708573' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4072822949' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2727700563' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:21.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:21 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:03:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/733458927' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 nova_compute[228683]: 2025-11-25 10:03:22.134 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 25 10:03:22 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3691621473' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.18429 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.28190 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.28105 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.28196 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.28114 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.28120 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.18468 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/733458927' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1370730776' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/665351589' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3691621473' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3438577591' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:23.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:23 compute-1 ceph-mon[79643]: pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1877302119' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.28247 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.28159 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.18513 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.28262 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.28171 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2620515358' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2439157298' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3316437577' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 sudo[244560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:03:23 compute-1 sudo[244560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:23 compute-1 sudo[244560]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:23 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 25 10:03:23 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/253141219' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:23.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:23 compute-1 virtqemud[228099]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 10:03:24 compute-1 systemd[1]: Starting Time & Date Service...
Nov 25 10:03:24 compute-1 systemd[1]: Started Time & Date Service.
Nov 25 10:03:24 compute-1 ceph-mon[79643]: from='client.18528 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/253141219' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2815871477' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3590049750' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-1 ceph-mon[79643]: from='client.18561 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-1 nova_compute[228683]: 2025-11-25 10:03:24.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:24 compute-1 nova_compute[228683]: 2025-11-25 10:03:24.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:03:24 compute-1 nova_compute[228683]: 2025-11-25 10:03:24.975 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:25.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:25 compute-1 ceph-mon[79643]: pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:25 compute-1 ceph-mon[79643]: from='client.18567 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/350362228' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2759359847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:25 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2308878960' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:25.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:26 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2944657567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:26 compute-1 nova_compute[228683]: 2025-11-25 10:03:26.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:26 compute-1 nova_compute[228683]: 2025-11-25 10:03:26.936 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:03:26 compute-1 nova_compute[228683]: 2025-11-25 10:03:26.936 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:03:26 compute-1 nova_compute[228683]: 2025-11-25 10:03:26.936 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:03:26 compute-1 nova_compute[228683]: 2025-11-25 10:03:26.937 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:03:26 compute-1 nova_compute[228683]: 2025-11-25 10:03:26.937 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.138 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:03:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3206515216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.268 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:03:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:27.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:27 compute-1 ceph-mon[79643]: pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3206515216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.487 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.488 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4775MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.488 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.488 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.676 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.676 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:03:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:27.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.775 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing inventories for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.859 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating ProviderTree inventory for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:03:27 compute-1 nova_compute[228683]: 2025-11-25 10:03:27.859 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating inventory in ProviderTree for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.124 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing aggregate associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.161 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing trait associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX512VAES,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.193 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:03:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1101623836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:28 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:03:28 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4009067469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.534 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.537 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.552 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.553 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.553 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:03:28 compute-1 nova_compute[228683]: 2025-11-25 10:03:28.554 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:29.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:29 compute-1 ceph-mon[79643]: pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4009067469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1492594854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:29 compute-1 nova_compute[228683]: 2025-11-25 10:03:29.561 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:29 compute-1 nova_compute[228683]: 2025-11-25 10:03:29.562 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:29.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:29 compute-1 nova_compute[228683]: 2025-11-25 10:03:29.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:29 compute-1 nova_compute[228683]: 2025-11-25 10:03:29.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:03:29 compute-1 nova_compute[228683]: 2025-11-25 10:03:29.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:03:29 compute-1 nova_compute[228683]: 2025-11-25 10:03:29.924 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:03:29 compute-1 nova_compute[228683]: 2025-11-25 10:03:29.924 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:29 compute-1 nova_compute[228683]: 2025-11-25 10:03:29.977 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:30 compute-1 nova_compute[228683]: 2025-11-25 10:03:30.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:31.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:31 compute-1 ceph-mon[79643]: pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:31.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:31 compute-1 nova_compute[228683]: 2025-11-25 10:03:31.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:31 compute-1 nova_compute[228683]: 2025-11-25 10:03:31.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:31 compute-1 nova_compute[228683]: 2025-11-25 10:03:31.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:03:31 compute-1 nova_compute[228683]: 2025-11-25 10:03:31.916 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:03:32 compute-1 nova_compute[228683]: 2025-11-25 10:03:32.140 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:32 compute-1 sudo[245096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:32 compute-1 sudo[245096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:32 compute-1 sudo[245096]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:32 compute-1 podman[245120]: 2025-11-25 10:03:32.238076816 +0000 UTC m=+0.040844835 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:03:32 compute-1 sudo[245126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 25 10:03:32 compute-1 sudo[245126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:32 compute-1 sudo[245126]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:32 compute-1 sudo[245179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:32 compute-1 sudo[245179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:32 compute-1 sudo[245179]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:32 compute-1 sudo[245204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:03:32 compute-1 sudo[245204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:32 compute-1 sudo[245204]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:33 compute-1 sudo[245258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:33 compute-1 sudo[245258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:33 compute-1 sudo[245258]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:33 compute-1 sudo[245283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- inventory --format=json-pretty --filter-for-batch
Nov 25 10:03:33 compute-1 sudo[245283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:33.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:33 compute-1 podman[245340]: 2025-11-25 10:03:33.35433341 +0000 UTC m=+0.028390452 container create cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 10:03:33 compute-1 systemd[1]: Started libpod-conmon-cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f.scope.
Nov 25 10:03:33 compute-1 systemd[1]: Started libcrun container.
Nov 25 10:03:33 compute-1 podman[245340]: 2025-11-25 10:03:33.405921314 +0000 UTC m=+0.079978355 container init cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 10:03:33 compute-1 podman[245340]: 2025-11-25 10:03:33.41062549 +0000 UTC m=+0.084682532 container start cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:03:33 compute-1 podman[245340]: 2025-11-25 10:03:33.41199962 +0000 UTC m=+0.086056662 container attach cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:03:33 compute-1 nervous_driscoll[245353]: 167 167
Nov 25 10:03:33 compute-1 systemd[1]: libpod-cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f.scope: Deactivated successfully.
Nov 25 10:03:33 compute-1 podman[245340]: 2025-11-25 10:03:33.415539282 +0000 UTC m=+0.089596344 container died cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:03:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-1af1d29970a1d8670cef20e357eda67efc36879b3e36a6dd6128691d3cf7ce69-merged.mount: Deactivated successfully.
Nov 25 10:03:33 compute-1 podman[245340]: 2025-11-25 10:03:33.436933546 +0000 UTC m=+0.110990588 container remove cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:03:33 compute-1 podman[245340]: 2025-11-25 10:03:33.342792878 +0000 UTC m=+0.016849941 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:33 compute-1 systemd[1]: libpod-conmon-cc07c1e5c6aca80f5856f511abfee386293800272176d1c8a7d77088b916f81f.scope: Deactivated successfully.
Nov 25 10:03:33 compute-1 ceph-mon[79643]: pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:33 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:33 compute-1 podman[245376]: 2025-11-25 10:03:33.553590902 +0000 UTC m=+0.026844988 container create 9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 10:03:33 compute-1 systemd[1]: Started libpod-conmon-9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e.scope.
Nov 25 10:03:33 compute-1 systemd[1]: Started libcrun container.
Nov 25 10:03:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b920a15d1727d11669d881200e79a1d27a2268195408b1d7a2bd97daedce2b3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b920a15d1727d11669d881200e79a1d27a2268195408b1d7a2bd97daedce2b3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b920a15d1727d11669d881200e79a1d27a2268195408b1d7a2bd97daedce2b3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b920a15d1727d11669d881200e79a1d27a2268195408b1d7a2bd97daedce2b3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:33 compute-1 podman[245376]: 2025-11-25 10:03:33.600637677 +0000 UTC m=+0.073891763 container init 9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 10:03:33 compute-1 podman[245376]: 2025-11-25 10:03:33.608347117 +0000 UTC m=+0.081601204 container start 9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_engelbart, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 10:03:33 compute-1 podman[245376]: 2025-11-25 10:03:33.609994523 +0000 UTC m=+0.083248609 container attach 9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 10:03:33 compute-1 podman[245376]: 2025-11-25 10:03:33.543229954 +0000 UTC m=+0.016484050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:33.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:33 compute-1 nova_compute[228683]: 2025-11-25 10:03:33.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:33 compute-1 nova_compute[228683]: 2025-11-25 10:03:33.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:33 compute-1 nova_compute[228683]: 2025-11-25 10:03:33.895 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 10:03:34 compute-1 great_engelbart[245390]: [
Nov 25 10:03:34 compute-1 great_engelbart[245390]:     {
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "available": false,
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "being_replaced": false,
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "ceph_device_lvm": false,
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "lsm_data": {},
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "lvs": [],
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "path": "/dev/sr0",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "rejected_reasons": [
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "Has a FileSystem",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "Insufficient space (<5GB)"
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         ],
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         "sys_api": {
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "actuators": null,
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "device_nodes": [
Nov 25 10:03:34 compute-1 great_engelbart[245390]:                 "sr0"
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             ],
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "devname": "sr0",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "human_readable_size": "474.00 KB",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "id_bus": "ata",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "model": "QEMU DVD-ROM",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "nr_requests": "64",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "parent": "/dev/sr0",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "partitions": {},
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "path": "/dev/sr0",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "removable": "1",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "rev": "2.5+",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "ro": "0",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "rotational": "1",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "sas_address": "",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "sas_device_handle": "",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "scheduler_mode": "mq-deadline",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "sectors": 0,
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "sectorsize": "2048",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "size": 485376.0,
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "support_discard": "2048",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "type": "disk",
Nov 25 10:03:34 compute-1 great_engelbart[245390]:             "vendor": "QEMU"
Nov 25 10:03:34 compute-1 great_engelbart[245390]:         }
Nov 25 10:03:34 compute-1 great_engelbart[245390]:     }
Nov 25 10:03:34 compute-1 great_engelbart[245390]: ]
Nov 25 10:03:34 compute-1 systemd[1]: libpod-9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e.scope: Deactivated successfully.
Nov 25 10:03:34 compute-1 conmon[245390]: conmon 9d23382e4cf05b4a0931 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e.scope/container/memory.events
Nov 25 10:03:34 compute-1 podman[245376]: 2025-11-25 10:03:34.180305881 +0000 UTC m=+0.653559966 container died 9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-b920a15d1727d11669d881200e79a1d27a2268195408b1d7a2bd97daedce2b3d-merged.mount: Deactivated successfully.
Nov 25 10:03:34 compute-1 podman[245376]: 2025-11-25 10:03:34.202329049 +0000 UTC m=+0.675583136 container remove 9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 10:03:34 compute-1 sudo[245283]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:34 compute-1 systemd[1]: libpod-conmon-9d23382e4cf05b4a0931521060ba6f116795c90de8461b451909ab902533ce7e.scope: Deactivated successfully.
Nov 25 10:03:34 compute-1 nova_compute[228683]: 2025-11-25 10:03:34.977 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-1 ceph-mon[79643]: pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:03:35 compute-1 ceph-mon[79643]: pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:03:35 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:03:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:35.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:35.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:37 compute-1 nova_compute[228683]: 2025-11-25 10:03:37.143 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:37.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:37 compute-1 ceph-mon[79643]: pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 889 B/s rd, 0 op/s
Nov 25 10:03:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:03:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:37.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.039020) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018039039, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1673, "num_deletes": 250, "total_data_size": 3512764, "memory_usage": 3576304, "flush_reason": "Manual Compaction"}
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018045491, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2264580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28481, "largest_seqno": 30148, "table_properties": {"data_size": 2256608, "index_size": 4402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 21290, "raw_average_key_size": 21, "raw_value_size": 2238936, "raw_average_value_size": 2270, "num_data_blocks": 190, "num_entries": 986, "num_filter_entries": 986, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064931, "oldest_key_time": 1764064931, "file_creation_time": 1764065018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 6492 microseconds, and 3725 cpu microseconds.
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.045511) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2264580 bytes OK
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.045521) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.046545) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.046555) EVENT_LOG_v1 {"time_micros": 1764065018046552, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.046564) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3503959, prev total WAL file size 3503959, number of live WAL files 2.
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.055382) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2211KB)], [54(13MB)]
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018055424, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 16170877, "oldest_snapshot_seqno": -1}
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6322 keys, 14908226 bytes, temperature: kUnknown
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018090645, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 14908226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14865043, "index_size": 26305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 162230, "raw_average_key_size": 25, "raw_value_size": 14750254, "raw_average_value_size": 2333, "num_data_blocks": 1060, "num_entries": 6322, "num_filter_entries": 6322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764065018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.090796) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 14908226 bytes
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.091113) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 458.4 rd, 422.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.3 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(13.7) write-amplify(6.6) OK, records in: 6840, records dropped: 518 output_compression: NoCompression
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.091125) EVENT_LOG_v1 {"time_micros": 1764065018091120, "job": 32, "event": "compaction_finished", "compaction_time_micros": 35277, "compaction_time_cpu_micros": 21346, "output_level": 6, "num_output_files": 1, "total_output_size": 14908226, "num_input_records": 6840, "num_output_records": 6322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018091544, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018092977, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.047027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.093018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.093020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.093021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.093022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:03:38.093023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-1 sudo[246758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:03:38 compute-1 sudo[246758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:38 compute-1 sudo[246758]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:39 compute-1 ceph-mon[79643]: pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:39.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:39.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:39 compute-1 nova_compute[228683]: 2025-11-25 10:03:39.979 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:41.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:41 compute-1 ceph-mon[79643]: pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:41.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:41 compute-1 podman[246785]: 2025-11-25 10:03:41.804921224 +0000 UTC m=+0.054805800 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:03:42 compute-1 nova_compute[228683]: 2025-11-25 10:03:42.143 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:43 compute-1 ceph-mon[79643]: pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 0 B/s wr, 207 op/s
Nov 25 10:03:43 compute-1 sudo[246809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:03:43 compute-1 sudo[246809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:43 compute-1 sudo[246809]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:43.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:44 compute-1 nova_compute[228683]: 2025-11-25 10:03:44.981 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:45 compute-1 ceph-mon[79643]: pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 0 B/s wr, 207 op/s
Nov 25 10:03:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:46 compute-1 podman[246835]: 2025-11-25 10:03:46.761946279 +0000 UTC m=+0.040451195 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:03:47 compute-1 nova_compute[228683]: 2025-11-25 10:03:47.146 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:47 compute-1 ceph-mon[79643]: pgmap v978: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:47.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:03:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:03:49 compute-1 ceph-mon[79643]: pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:49.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:49 compute-1 nova_compute[228683]: 2025-11-25 10:03:49.984 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:51.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:51 compute-1 ceph-mon[79643]: pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:51.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:52 compute-1 nova_compute[228683]: 2025-11-25 10:03:52.149 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:03:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:53.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:03:53 compute-1 ceph-mon[79643]: pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:53.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:03:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1838669055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:03:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:03:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1838669055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:03:54 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 10:03:54 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 10:03:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1838669055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:03:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1838669055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:03:54 compute-1 nova_compute[228683]: 2025-11-25 10:03:54.986 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:55.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:55 compute-1 ceph-mon[79643]: pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:57 compute-1 nova_compute[228683]: 2025-11-25 10:03:57.151 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:57 compute-1 ceph-mon[79643]: pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:03:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:59 compute-1 ceph-mon[79643]: pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:03:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:59 compute-1 nova_compute[228683]: 2025-11-25 10:03:59.989 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:00 compute-1 sudo[237901]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:00 compute-1 sshd-session[237900]: Received disconnect from 192.168.122.10 port 56116:11: disconnected by user
Nov 25 10:04:00 compute-1 sshd-session[237900]: Disconnected from user zuul 192.168.122.10 port 56116
Nov 25 10:04:00 compute-1 sshd-session[237897]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:04:00 compute-1 systemd[1]: session-53.scope: Deactivated successfully.
Nov 25 10:04:00 compute-1 systemd[1]: session-53.scope: Consumed 2min 4.018s CPU time, 660.2M memory peak, read 240.1M from disk, written 64.8M to disk.
Nov 25 10:04:00 compute-1 systemd-logind[746]: Session 53 logged out. Waiting for processes to exit.
Nov 25 10:04:00 compute-1 systemd-logind[746]: Removed session 53.
Nov 25 10:04:00 compute-1 sshd-session[246863]: Accepted publickey for zuul from 192.168.122.10 port 59136 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 10:04:00 compute-1 systemd-logind[746]: New session 54 of user zuul.
Nov 25 10:04:00 compute-1 systemd[1]: Started Session 54 of User zuul.
Nov 25 10:04:00 compute-1 sshd-session[246863]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:04:00 compute-1 sudo[246867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-1-2025-11-25-gwpdeld.tar.xz
Nov 25 10:04:00 compute-1 sudo[246867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:00 compute-1 sudo[246867]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:00 compute-1 sshd-session[246866]: Received disconnect from 192.168.122.10 port 59136:11: disconnected by user
Nov 25 10:04:00 compute-1 sshd-session[246866]: Disconnected from user zuul 192.168.122.10 port 59136
Nov 25 10:04:00 compute-1 sshd-session[246863]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:04:00 compute-1 systemd[1]: session-54.scope: Deactivated successfully.
Nov 25 10:04:00 compute-1 systemd-logind[746]: Session 54 logged out. Waiting for processes to exit.
Nov 25 10:04:00 compute-1 systemd-logind[746]: Removed session 54.
Nov 25 10:04:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:00 compute-1 sshd-session[246892]: Accepted publickey for zuul from 192.168.122.10 port 59146 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 10:04:00 compute-1 systemd-logind[746]: New session 55 of user zuul.
Nov 25 10:04:00 compute-1 systemd[1]: Started Session 55 of User zuul.
Nov 25 10:04:00 compute-1 sshd-session[246892]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:04:00 compute-1 sudo[246896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 25 10:04:00 compute-1 sudo[246896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:00 compute-1 sudo[246896]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:00 compute-1 sshd-session[246895]: Received disconnect from 192.168.122.10 port 59146:11: disconnected by user
Nov 25 10:04:00 compute-1 sshd-session[246895]: Disconnected from user zuul 192.168.122.10 port 59146
Nov 25 10:04:00 compute-1 sshd-session[246892]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:04:00 compute-1 systemd[1]: session-55.scope: Deactivated successfully.
Nov 25 10:04:00 compute-1 systemd-logind[746]: Session 55 logged out. Waiting for processes to exit.
Nov 25 10:04:00 compute-1 systemd-logind[746]: Removed session 55.
Nov 25 10:04:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:01 compute-1 ceph-mon[79643]: pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:02 compute-1 nova_compute[228683]: 2025-11-25 10:04:02.153 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:02 compute-1 podman[246922]: 2025-11-25 10:04:02.785911771 +0000 UTC m=+0.039244419 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:04:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:03.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:03 compute-1 sudo[246940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:04:03 compute-1 sudo[246940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:03 compute-1 sudo[246940]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:03 compute-1 ceph-mon[79643]: pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:04:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:03.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:04 compute-1 nova_compute[228683]: 2025-11-25 10:04:04.989 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:04:05.006 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:04:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:04:05.006 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:04:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:04:05.006 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:04:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:05.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:05 compute-1 ceph-mon[79643]: pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:05.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:07 compute-1 nova_compute[228683]: 2025-11-25 10:04:07.155 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:07 compute-1 ceph-mon[79643]: pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:04:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:07.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:09 compute-1 ceph-mon[79643]: pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:09.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:09 compute-1 nova_compute[228683]: 2025-11-25 10:04:09.992 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:11 compute-1 ceph-mon[79643]: pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:11.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:12 compute-1 nova_compute[228683]: 2025-11-25 10:04:12.159 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:12 compute-1 podman[246969]: 2025-11-25 10:04:12.797041848 +0000 UTC m=+0.052008767 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 10:04:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:13 compute-1 ceph-mon[79643]: pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:13.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:14 compute-1 nova_compute[228683]: 2025-11-25 10:04:14.993 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:15 compute-1 ceph-mon[79643]: pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:15.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:17 compute-1 nova_compute[228683]: 2025-11-25 10:04:17.161 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:17.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:17 compute-1 ceph-mon[79643]: pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
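The recurring ceph-mon pgmap lines are the cluster heartbeat: all 337 placement groups active+clean, 41 MiB of logical data, 289 MiB raw used of 60 GiB, plus instantaneous client IO rates. Turning them into time-series samples is a one-regex job (the pattern is an assumption fitted to the lines above):

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    def parse_pgmap(line):
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None

    # parse_pgmap on the v993 line above gives:
    # {'ver': '993', 'pgs': '337', 'states': '337 active+clean',
    #  'data': '41 MiB', 'used': '289 MiB', 'avail': '60 GiB', 'total': '60 GiB'}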
Nov 25 10:04:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:17 compute-1 podman[246995]: 2025-11-25 10:04:17.778692868 +0000 UTC m=+0.034076458 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 10:04:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:17.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:19 compute-1 ceph-mon[79643]: pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:19.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:19 compute-1 nova_compute[228683]: 2025-11-25 10:04:19.994 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:21.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:21 compute-1 ceph-mon[79643]: pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:21.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:22 compute-1 nova_compute[228683]: 2025-11-25 10:04:22.162 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:23 compute-1 sudo[247015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:04:23 compute-1 sudo[247015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:23 compute-1 sudo[247015]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:23 compute-1 ceph-mon[79643]: pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:23.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:24 compute-1 nova_compute[228683]: 2025-11-25 10:04:24.996 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:25.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:25 compute-1 ceph-mon[79643]: pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:25.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:26 compute-1 nova_compute[228683]: 2025-11-25 10:04:26.905 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:26 compute-1 nova_compute[228683]: 2025-11-25 10:04:26.905 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:04:26 compute-1 nova_compute[228683]: 2025-11-25 10:04:26.906 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:26 compute-1 nova_compute[228683]: 2025-11-25 10:04:26.943 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:04:26 compute-1 nova_compute[228683]: 2025-11-25 10:04:26.943 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:04:26 compute-1 nova_compute[228683]: 2025-11-25 10:04:26.943 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:04:26 compute-1 nova_compute[228683]: 2025-11-25 10:04:26.944 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:04:26 compute-1 nova_compute[228683]: 2025-11-25 10:04:26.944 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.165 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:04:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1505420710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.277 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
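The update_available_resource audit running from 10:04:26.906 onward shows how nova sizes RBD-backed storage: it shells out to ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf (twice in this pass, about 0.33s each), and the matching client.openstack "df" dispatches appear in the ceph-mon audit log alongside. A minimal sketch of the same probe, assuming the standard top-level "stats" keys of ceph df JSON output:

    import json
    import subprocess

    def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", client="openstack"):
        """Total and available capacity in GiB, via the same command nova logs above."""
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
        stats = json.loads(out)["stats"]
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

On this cluster that should return roughly (60.0, 59.7), consistent with the pgmap lines and with the free_disk=59.98GB figure in the resource view below.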
Nov 25 10:04:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.450 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.451 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4885MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.451 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.452 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.541 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.541 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.564 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:04:27 compute-1 ceph-mon[79643]: pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2853664828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1505420710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:27.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:04:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1216490648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.898 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:04:27 compute-1 nova_compute[228683]: 2025-11-25 10:04:27.902 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:04:28 compute-1 nova_compute[228683]: 2025-11-25 10:04:28.051 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:04:28 compute-1 nova_compute[228683]: 2025-11-25 10:04:28.052 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:04:28 compute-1 nova_compute[228683]: 2025-11-25 10:04:28.052 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
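The inventory dict logged at 10:04:28.051 is what nova reports to Placement, where schedulable capacity per resource class is (total - reserved) * allocation_ratio. With the values above, the node advertises 16 VCPUs, 7169 MB of RAM and 52.2 GB of disk. A worked check using the numbers from the log line:

    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 16.0, MEMORY_MB 7169.0, DISK_GB 52.2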
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.052454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068052473, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 706, "num_deletes": 256, "total_data_size": 1435923, "memory_usage": 1461856, "flush_reason": "Manual Compaction"}
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068055729, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 946095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30153, "largest_seqno": 30854, "table_properties": {"data_size": 942585, "index_size": 1354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7391, "raw_average_key_size": 18, "raw_value_size": 935661, "raw_average_value_size": 2282, "num_data_blocks": 60, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065018, "oldest_key_time": 1764065018, "file_creation_time": 1764065068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 3299 microseconds, and 2119 cpu microseconds.
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.055753) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 946095 bytes OK
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.055763) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.056166) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.056176) EVENT_LOG_v1 {"time_micros": 1764065068056173, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.056185) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1432105, prev total WAL file size 1432105, number of live WAL files 2.
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.056551) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(923KB)], [57(14MB)]
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068056567, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 15854321, "oldest_snapshot_seqno": -1}
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6209 keys, 15728522 bytes, temperature: kUnknown
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068091623, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 15728522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15684731, "index_size": 27169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 161029, "raw_average_key_size": 25, "raw_value_size": 15570546, "raw_average_value_size": 2507, "num_data_blocks": 1094, "num_entries": 6209, "num_filter_entries": 6209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764065068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.091917) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 15728522 bytes
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.092615) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 450.0 rd, 446.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.2 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(33.4) write-amplify(16.6) OK, records in: 6732, records dropped: 523 output_compression: NoCompression
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.092630) EVENT_LOG_v1 {"time_micros": 1764065068092623, "job": 34, "event": "compaction_finished", "compaction_time_micros": 35232, "compaction_time_cpu_micros": 22020, "output_level": 6, "num_output_files": 1, "total_output_size": 15728522, "num_input_records": 6732, "num_output_records": 6209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068093112, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068095255, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.056526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
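The ceph-mon rocksdb burst above is a routine store compaction: JOB 33 flushes a ~1.4 MB memtable to L0 table #59 in 3.3 ms, JOB 34 then manually compacts it with the 14 MB L6 table #57 into table #60 at ~450 MB/s read, and the obsolete WAL and SST files are deleted immediately. The EVENT_LOG_v1 records carry the same data in machine-readable JSON; extracting them (the marker split below is an assumption based on the lines shown):

    import json

    def rocksdb_events(lines):
        """Yield the JSON payload of every rocksdb EVENT_LOG_v1 journal line."""
        marker = "EVENT_LOG_v1 "
        for line in lines:
            _, sep, payload = line.partition(marker)
            if sep:
                yield json.loads(payload)

    # The compaction_finished record above, for instance, reports
    # compaction_time_micros=35232 for 15728522 output bytes.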
Nov 25 10:04:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/404904334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3681770167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1216490648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2066540253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:29 compute-1 nova_compute[228683]: 2025-11-25 10:04:29.041 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:29.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:29 compute-1 ceph-mon[79643]: pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:29.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:29 compute-1 nova_compute[228683]: 2025-11-25 10:04:29.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:29 compute-1 nova_compute[228683]: 2025-11-25 10:04:29.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:04:29 compute-1 nova_compute[228683]: 2025-11-25 10:04:29.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:04:29 compute-1 nova_compute[228683]: 2025-11-25 10:04:29.916 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:04:29 compute-1 nova_compute[228683]: 2025-11-25 10:04:29.997 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:30 compute-1 nova_compute[228683]: 2025-11-25 10:04:30.912 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:31 compute-1 ceph-mon[79643]: pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:31.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:31 compute-1 nova_compute[228683]: 2025-11-25 10:04:31.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:32 compute-1 nova_compute[228683]: 2025-11-25 10:04:32.169 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:32 compute-1 nova_compute[228683]: 2025-11-25 10:04:32.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:33.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:33 compute-1 ceph-mon[79643]: pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:33 compute-1 podman[247089]: 2025-11-25 10:04:33.782444629 +0000 UTC m=+0.037600912 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:04:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:33 compute-1 nova_compute[228683]: 2025-11-25 10:04:33.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:34 compute-1 nova_compute[228683]: 2025-11-25 10:04:34.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:34 compute-1 nova_compute[228683]: 2025-11-25 10:04:34.998 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:35.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:35 compute-1 ceph-mon[79643]: pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:35.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:37 compute-1 nova_compute[228683]: 2025-11-25 10:04:37.171 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:37 compute-1 ceph-mon[79643]: pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:37 compute-1 nova_compute[228683]: 2025-11-25 10:04:37.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:38 compute-1 sudo[247107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:04:38 compute-1 sudo[247107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:38 compute-1 sudo[247107]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:38 compute-1 sudo[247132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:04:38 compute-1 sudo[247132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:38 compute-1 sudo[247132]: pam_unix(sudo:session): session closed for user root
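The sudo triples from ceph-admin are cephadm's periodic host scrape: /bin/true as a reachability probe, /bin/which python3 to locate an interpreter, then the copied cephadm binary under /var/lib/ceph/<fsid>/ with gather-facts and a 895s timeout. Auditing what ceph-admin ran is a one-liner over these journal lines; the regex below assumes the pam/sudo format shown:

    import re

    SUDO_RE = re.compile(
        r"sudo\[\d+\]: (?P<user>\S+) : PWD=(?P<pwd>\S+) ; "
        r"USER=(?P<as_user>\S+) ; COMMAND=(?P<cmd>.+)$")

    def sudo_commands(lines):
        for line in lines:
            m = SUDO_RE.search(line)
            if m:
                yield m.group("user"), m.group("as_user"), m.group("cmd")

    # Over this section it yields ('ceph-admin', 'root', '/bin/true'),
    # ('ceph-admin', 'root', '/bin/which python3'), the gather-facts call,
    # and ('ceph-admin', 'root', '/bin/ls /etc/sysctl.d').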
Nov 25 10:04:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:39.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:39 compute-1 ceph-mon[79643]: pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:04:39 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:04:39 compute-1 nova_compute[228683]: 2025-11-25 10:04:39.998 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:40 compute-1 ceph-mon[79643]: pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:41.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:42 compute-1 nova_compute[228683]: 2025-11-25 10:04:42.173 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:42 compute-1 sudo[247188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:04:42 compute-1 sudo[247188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:42 compute-1 sudo[247188]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:42 compute-1 ceph-mon[79643]: pgmap v1006: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 845 B/s rd, 0 op/s
Nov 25 10:04:42 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:42 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:43 compute-1 sudo[247214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:04:43 compute-1 sudo[247214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:43 compute-1 sudo[247214]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:43 compute-1 podman[247238]: 2025-11-25 10:04:43.758440945 +0000 UTC m=+0.062602484 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:04:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:43.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:44 compute-1 ceph-mon[79643]: pgmap v1007: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:45 compute-1 nova_compute[228683]: 2025-11-25 10:04:44.999 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:45.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:46 compute-1 ceph-mon[79643]: pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:47 compute-1 nova_compute[228683]: 2025-11-25 10:04:47.176 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:48 compute-1 podman[247264]: 2025-11-25 10:04:48.788909479 +0000 UTC m=+0.040018378 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:04:48 compute-1 ceph-mon[79643]: pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:49.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:49.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:50 compute-1 nova_compute[228683]: 2025-11-25 10:04:50.000 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:50 compute-1 ceph-mon[79643]: pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:51.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:51.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:52 compute-1 nova_compute[228683]: 2025-11-25 10:04:52.177 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:52 compute-1 ceph-mon[79643]: pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:53.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:53.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:04:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/822873780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:04:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:04:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/822873780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:04:54 compute-1 ceph-mon[79643]: pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/822873780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:04:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/822873780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
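The audit entries above show client.openstack polling the monitors with "df" and "osd pool get-quota" on the volumes pool; these are the same mon commands nova_compute issues later in this log via "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf". A sketch of reproducing that polling from Python; the result-field names in the final comment are assumptions about the JSON layout:

    import json
    import subprocess

    def ceph_json(*args):
        # Same client identity and conf file as the nova_compute invocation
        # seen later in this log.
        out = subprocess.check_output(
            ['ceph', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
             *args, '--format', 'json'])
        return json.loads(out)

    df = ceph_json('df')
    quota = ceph_json('osd', 'pool', 'get-quota', 'volumes')
    # e.g. df['stats']['total_avail_bytes'], quota.get('quota_max_bytes')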
Nov 25 10:04:55 compute-1 nova_compute[228683]: 2025-11-25 10:04:55.003 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:55.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:55.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:56 compute-1 ceph-mon[79643]: pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:57 compute-1 nova_compute[228683]: 2025-11-25 10:04:57.181 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:57.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:04:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:57.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:04:58 compute-1 ceph-mon[79643]: pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:59.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:04:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:59.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:00 compute-1 nova_compute[228683]: 2025-11-25 10:05:00.003 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:00 compute-1 ceph-mon[79643]: pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:01.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:01.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:02 compute-1 nova_compute[228683]: 2025-11-25 10:05:02.183 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:02 compute-1 ceph-mon[79643]: pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:03.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:03 compute-1 sudo[247289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:05:03 compute-1 sudo[247289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:03 compute-1 sudo[247289]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:03.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:04 compute-1 podman[247314]: 2025-11-25 10:05:04.77781576 +0000 UTC m=+0.032468407 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:05:04 compute-1 ceph-mon[79643]: pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:05 compute-1 nova_compute[228683]: 2025-11-25 10:05:05.005 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:05:05.007 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:05:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:05:05.007 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:05:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:05:05.007 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:05:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:05.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:05.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:06 compute-1 ceph-mon[79643]: pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:07 compute-1 nova_compute[228683]: 2025-11-25 10:05:07.186 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:07.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:08 compute-1 ceph-mon[79643]: pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:09.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:09.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:10 compute-1 nova_compute[228683]: 2025-11-25 10:05:10.006 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:10 compute-1 ceph-mon[79643]: pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:05:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:05:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:11.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:12 compute-1 nova_compute[228683]: 2025-11-25 10:05:12.188 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:12 compute-1 ceph-mon[79643]: pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:13.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:13.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:14 compute-1 podman[247335]: 2025-11-25 10:05:14.799115757 +0000 UTC m=+0.054171264 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:05:14 compute-1 ceph-mon[79643]: pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:15 compute-1 nova_compute[228683]: 2025-11-25 10:05:15.007 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:15.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:16 compute-1 ceph-mon[79643]: pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:17 compute-1 nova_compute[228683]: 2025-11-25 10:05:17.191 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:17.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:17.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:18 compute-1 ceph-mon[79643]: pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:19.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:19 compute-1 podman[247362]: 2025-11-25 10:05:19.780894728 +0000 UTC m=+0.036282376 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 10:05:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:19.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:19 compute-1 ceph-mon[79643]: pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:20 compute-1 nova_compute[228683]: 2025-11-25 10:05:20.009 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:05:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:21.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:05:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:21.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:22 compute-1 nova_compute[228683]: 2025-11-25 10:05:22.194 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:22 compute-1 ceph-mon[79643]: pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:23.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:23 compute-1 sudo[247381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:05:23 compute-1 sudo[247381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:23 compute-1 sudo[247381]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:23.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:24 compute-1 ceph-mon[79643]: pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.374190) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124374205, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 826, "num_deletes": 251, "total_data_size": 1762422, "memory_usage": 1793232, "flush_reason": "Manual Compaction"}
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124377462, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 1160685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30859, "largest_seqno": 31680, "table_properties": {"data_size": 1156760, "index_size": 1705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8970, "raw_average_key_size": 19, "raw_value_size": 1148812, "raw_average_value_size": 2524, "num_data_blocks": 74, "num_entries": 455, "num_filter_entries": 455, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065068, "oldest_key_time": 1764065068, "file_creation_time": 1764065124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 3295 microseconds, and 2364 cpu microseconds.
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377484) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 1160685 bytes OK
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377493) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377805) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377814) EVENT_LOG_v1 {"time_micros": 1764065124377812, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1758117, prev total WAL file size 1758117, number of live WAL files 2.
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.378165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(1133KB)], [60(14MB)]
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124378190, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16889207, "oldest_snapshot_seqno": -1}
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6146 keys, 14784913 bytes, temperature: kUnknown
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124413546, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14784913, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14742286, "index_size": 26133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 160379, "raw_average_key_size": 26, "raw_value_size": 14629868, "raw_average_value_size": 2380, "num_data_blocks": 1047, "num_entries": 6146, "num_filter_entries": 6146, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764065124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.413674) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14784913 bytes
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.414046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 477.2 rd, 417.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 15.0 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(27.3) write-amplify(12.7) OK, records in: 6664, records dropped: 518 output_compression: NoCompression
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.414058) EVENT_LOG_v1 {"time_micros": 1764065124414053, "job": 36, "event": "compaction_finished", "compaction_time_micros": 35390, "compaction_time_cpu_micros": 21557, "output_level": 6, "num_output_files": 1, "total_output_size": 14784913, "num_input_records": 6664, "num_output_records": 6146, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124414228, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124416078, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.378119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
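The block above is one manual compaction cycle on the mon's RocksDB store: job 35 flushes the memtable to level-0 table #62, job 36 merges it with the existing level-6 table #60 into #63, and both inputs are deleted. The EVENT_LOG_v1 payloads are plain JSON, so the cycle can be summarised mechanically; in this sketch the input file name is a placeholder:

    import json
    import re

    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    def rocksdb_events(lines):
        # Yield the structured payloads (flush_started, table_file_creation,
        # compaction_finished, ...) embedded in ceph-mon journal lines.
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    for ev in rocksdb_events(open('mon-journal.log')):
        if ev['event'] == 'compaction_finished':
            print(ev['job'], ev['total_output_size'], ev['compaction_time_micros'])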
Nov 25 10:05:25 compute-1 nova_compute[228683]: 2025-11-25 10:05:25.011 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:25.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:25.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:26 compute-1 ceph-mon[79643]: pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:26 compute-1 nova_compute[228683]: 2025-11-25 10:05:26.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:26 compute-1 nova_compute[228683]: 2025-11-25 10:05:26.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:05:26 compute-1 nova_compute[228683]: 2025-11-25 10:05:26.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.024 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.025 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.025 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.025 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.025 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.197 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:05:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2026839709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.358 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:05:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2026839709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:05:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:27.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.554 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.555 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4873MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.555 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.555 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.610 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.610 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.629 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:05:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:27.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
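The anonymous HEAD / requests arriving every two seconds from 192.168.122.100 and 192.168.122.102 look like external liveness probes against the RGW beast frontend, which is why each one returns 200 with an empty body at near-zero latency. A probe of the same shape can be reproduced by hand; note the host and port below are assumptions, since the beast endpoint itself is not shown in this log:

    import http.client

    # Hypothetical endpoint: substitute the real beast frontend host:port.
    conn = http.client.HTTPConnection('compute-1', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # the probes above log http_status=200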
Nov 25 10:05:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:05:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1737718634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.961 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
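The Running cmd / returned: 0 pair above is oslo.concurrency's processutils wrapping a shell-out to the ceph CLI. The same call pattern in a short sketch, with the arguments copied from the logged command line:

    import json
    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on a non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    total_gib = stats['stats']['total_bytes'] / 2**30  # cluster-wide capacity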
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.965 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.981 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
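Placement derives the schedulable capacity of each resource class as (total - reserved) * allocation_ratio. Plugging in the inventory logged above shows what this node can actually accept:

    inventory = {  # values copied from the inventory line above
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 16.0, MEMORY_MB 7169.0, DISK_GB 52.2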
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.982 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:05:27 compute-1 nova_compute[228683]: 2025-11-25 10:05:27.982 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
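The acquire/release pair bracketing the update (waited 0.000s, then held 0.427s) is oslo.concurrency's lockutils serializing all mutations of the tracker's state on the "compute_resources" semaphore. The same pattern in miniature, with the lock name copied from the log and the body a stand-in:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Critical section: one thread at a time, producing the
        # acquire/release DEBUG lines seen above.
        pass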
Nov 25 10:05:28 compute-1 ceph-mon[79643]: pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
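The pgmap ticks are the mon's periodic placement-group digest: all 337 PGs active+clean on a 60 GiB cluster holding 41 MiB of data, with the small read trickle presumably coming from the health probes and capacity polls in this window. The same figure on demand (requires a reachable cluster and keyring):

    import json, subprocess

    status = json.loads(subprocess.check_output(
        ['ceph', 'status', '--format', 'json']))
    print(status['pgmap']['num_pgs'])  # 337 in the log above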
Nov 25 10:05:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/251651596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3162503918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1737718634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2471864516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/879975490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:29.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:29.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:29 compute-1 nova_compute[228683]: 2025-11-25 10:05:29.982 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:30 compute-1 nova_compute[228683]: 2025-11-25 10:05:30.011 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:30 compute-1 ceph-mon[79643]: pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
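mgr.compute-0.zcfgby polls `osd blocklist ls` on a roughly 15-second cycle (it recurs at 10:05:45, 10:06:00, and 10:06:15 below), keeping the mgr's view of blocklisted clients current. The same query by hand, as a sketch:

    import json, subprocess

    # Same command the mgr dispatches above; returns the blocklisted client addrs.
    bl = json.loads(subprocess.check_output(
        ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json']))
    print(len(bl))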
Nov 25 10:05:30 compute-1 nova_compute[228683]: 2025-11-25 10:05:30.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:30 compute-1 nova_compute[228683]: 2025-11-25 10:05:30.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:30 compute-1 nova_compute[228683]: 2025-11-25 10:05:30.893 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:05:30 compute-1 nova_compute[228683]: 2025-11-25 10:05:30.893 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:05:30 compute-1 nova_compute[228683]: 2025-11-25 10:05:30.903 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
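_heal_instance_info_cache rebuilds its candidate list on every pass and refreshes instances' network info caches; with nothing hosted here it exits immediately ("Didn't find any instances..."). Its cadence is governed by nova.conf's heal_instance_info_cache_interval. A sketch of that knob as an oslo.config option, with the name and default as nova documents them:

    from oslo_config import cfg

    opt = cfg.IntOpt('heal_instance_info_cache_interval', default=60,
                     help='Seconds between runs of the instance '
                          'network info cache heal task.')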
Nov 25 10:05:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:31.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:31.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:32 compute-1 nova_compute[228683]: 2025-11-25 10:05:32.200 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
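The _set_new_cache_sizes lines recur every five seconds as the mon's cache autotuner re-splits its memory budget (~0.95 GiB here) between the incremental-osdmap (inc_alloc), full-osdmap (full_alloc), and key-value (kv_alloc) caches. The budget is governed by the mon_memory_target option; a sketch for reading it back, assuming a standard cephadm-deployed cluster:

    import subprocess

    print(subprocess.check_output(
        ['ceph', 'config', 'get', 'mon', 'mon_memory_target']).decode())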
Nov 25 10:05:32 compute-1 ceph-mon[79643]: pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:33.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:33.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:33 compute-1 nova_compute[228683]: 2025-11-25 10:05:33.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:33 compute-1 nova_compute[228683]: 2025-11-25 10:05:33.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:34 compute-1 ceph-mon[79643]: pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:35 compute-1 nova_compute[228683]: 2025-11-25 10:05:35.014 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:35.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:35 compute-1 podman[247456]: 2025-11-25 10:05:35.783864599 +0000 UTC m=+0.035359427 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
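The podman[...] container health_status events are the EDPM-managed healthchecks firing: each runs the configured '/openstack/healthcheck' test inside the container and records healthy with health_failing_streak=0. The same check can be triggered on demand; exit status 0 means healthy:

    import subprocess

    # Runs the container's configured healthcheck once.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=True)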
Nov 25 10:05:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:35.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:35 compute-1 nova_compute[228683]: 2025-11-25 10:05:35.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:36 compute-1 ceph-mon[79643]: pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:36 compute-1 nova_compute[228683]: 2025-11-25 10:05:36.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:37 compute-1 nova_compute[228683]: 2025-11-25 10:05:37.204 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:37.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:37.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:38 compute-1 ceph-mon[79643]: pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:39.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:40 compute-1 nova_compute[228683]: 2025-11-25 10:05:40.015 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:40 compute-1 ceph-mon[79643]: pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:41.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:41.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:42 compute-1 nova_compute[228683]: 2025-11-25 10:05:42.207 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:42 compute-1 ceph-mon[79643]: pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:42 compute-1 sudo[247475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:05:42 compute-1 sudo[247475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:42 compute-1 sudo[247475]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:42 compute-1 sudo[247500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:05:42 compute-1 sudo[247500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-1 sudo[247500]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:43 compute-1 sudo[247554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:05:43 compute-1 sudo[247554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-1 sudo[247554]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:43 compute-1 sudo[247580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 10:05:43 compute-1 sudo[247580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-1 sudo[247580]: pam_unix(sudo:session): session closed for user root
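This sudo burst from ceph-admin is cephadm's SSH orchestrator refreshing its host inventory: it resolves python3 with `which`, then runs the content-hashed cephadm binary under /var/lib/ceph/<fsid>/ with gather-facts and list-networks (the --timeout 895 is as logged). Both subcommands also work standalone; a sketch, run as root as the sudo lines show:

    import json, subprocess

    # gather-facts prints host facts as JSON on stdout.
    facts = json.loads(subprocess.check_output(['cephadm', 'gather-facts']))
    print(facts['hostname'], facts.get('memory_total_kb'))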
Nov 25 10:05:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:43.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:43 compute-1 sudo[247621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:05:43 compute-1 sudo[247621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-1 sudo[247621]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:44 compute-1 ceph-mon[79643]: pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:05:44 compute-1 ceph-mon[79643]: pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:05:44 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:05:45 compute-1 nova_compute[228683]: 2025-11-25 10:05:45.016 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:05:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:45.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:05:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:45 compute-1 podman[247647]: 2025-11-25 10:05:45.795355488 +0000 UTC m=+0.048996110 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:05:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:45.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:46 compute-1 ceph-mon[79643]: pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:46 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:46 compute-1 sudo[247671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:05:46 compute-1 sudo[247671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:46 compute-1 sudo[247671]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:47 compute-1 nova_compute[228683]: 2025-11-25 10:05:47.208 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:47.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:05:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:47.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:05:48 compute-1 ceph-mon[79643]: pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:49.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:05:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:49.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:05:50 compute-1 nova_compute[228683]: 2025-11-25 10:05:50.017 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:50 compute-1 ceph-mon[79643]: pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:50 compute-1 podman[247698]: 2025-11-25 10:05:50.783911942 +0000 UTC m=+0.038131212 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:05:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:51.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:51.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:52 compute-1 nova_compute[228683]: 2025-11-25 10:05:52.211 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:52 compute-1 ceph-mon[79643]: pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 901 B/s rd, 0 op/s
Nov 25 10:05:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:53.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:54 compute-1 ceph-mon[79643]: pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 901 B/s rd, 0 op/s
Nov 25 10:05:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1723778353' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:05:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1723778353' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
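The client.openstack session from 192.168.122.10 pairs a cluster-wide df with an osd pool get-quota on the volumes pool, consistent with the capacity/quota probe a Cinder RBD backend issues when refreshing pool stats for the scheduler. Both queries by hand, under the same client identity, as a sketch:

    import json, subprocess

    base = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    df = json.loads(subprocess.check_output(
        ['ceph', 'df', '--format=json'] + base))
    quota = json.loads(subprocess.check_output(
        ['ceph', 'osd', 'pool', 'get-quota', 'volumes', '--format=json'] + base))
    print(df['stats']['total_avail_bytes'], quota['quota_max_bytes'])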
Nov 25 10:05:55 compute-1 nova_compute[228683]: 2025-11-25 10:05:55.019 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:05:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:05:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:55.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:56 compute-1 ceph-mon[79643]: pgmap v1044: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:57 compute-1 nova_compute[228683]: 2025-11-25 10:05:57.214 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:57.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:57.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:58 compute-1 ceph-mon[79643]: pgmap v1045: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:59.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:05:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:59.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:00 compute-1 nova_compute[228683]: 2025-11-25 10:06:00.020 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:00 compute-1 ceph-mon[79643]: pgmap v1046: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:06:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:01.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:06:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:01.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:02 compute-1 nova_compute[228683]: 2025-11-25 10:06:02.218 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:02 compute-1 ceph-mon[79643]: pgmap v1047: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:06:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:03.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:03.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:03 compute-1 sudo[247722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:06:03 compute-1 sudo[247722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:03 compute-1 sudo[247722]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:04 compute-1 ceph-mon[79643]: pgmap v1048: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:06:05.008 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:06:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:06:05.008 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:06:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:06:05.008 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
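The metadata agent's ProcessMonitor takes its "_check_child_processes" lock for effectively zero time while verifying that the haproxy children it spawned are still alive. The interval and the action taken on a dead child are the agent's check_child_processes_interval and check_child_processes_action options; a sketch, with names and defaults as in neutron's AGENT option group:

    from oslo_config import cfg

    opts = [
        cfg.IntOpt('check_child_processes_interval', default=60),
        cfg.StrOpt('check_child_processes_action', default='respawn'),
    ]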
Nov 25 10:06:05 compute-1 nova_compute[228683]: 2025-11-25 10:06:05.022 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:05.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:05.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:06 compute-1 ceph-mon[79643]: pgmap v1049: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:06 compute-1 podman[247749]: 2025-11-25 10:06:06.783911524 +0000 UTC m=+0.035596864 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:06:07 compute-1 nova_compute[228683]: 2025-11-25 10:06:07.221 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:07.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:07.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:08 compute-1 ceph-mon[79643]: pgmap v1050: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:09.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:10 compute-1 nova_compute[228683]: 2025-11-25 10:06:10.023 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:10 compute-1 ceph-mon[79643]: pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:06:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:11.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:06:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:11.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:12 compute-1 nova_compute[228683]: 2025-11-25 10:06:12.224 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:12 compute-1 ceph-mon[79643]: pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:13.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:14 compute-1 ceph-mon[79643]: pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:15 compute-1 nova_compute[228683]: 2025-11-25 10:06:15.024 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:15.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:15.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:16 compute-1 ceph-mon[79643]: pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:16 compute-1 podman[247770]: 2025-11-25 10:06:16.804062212 +0000 UTC m=+0.059619952 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:06:17 compute-1 nova_compute[228683]: 2025-11-25 10:06:17.225 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:17.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:17.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:18 compute-1 ceph-mon[79643]: pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:19.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:20 compute-1 nova_compute[228683]: 2025-11-25 10:06:20.027 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:20 compute-1 ceph-mon[79643]: pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:06:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:21.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:06:21 compute-1 podman[247796]: 2025-11-25 10:06:21.784243363 +0000 UTC m=+0.039255550 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 10:06:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:21.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:22 compute-1 nova_compute[228683]: 2025-11-25 10:06:22.229 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:22 compute-1 ceph-mon[79643]: pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:06:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:23.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:06:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:23.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:23 compute-1 sudo[247814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:06:23 compute-1 sudo[247814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:23 compute-1 sudo[247814]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:24 compute-1 ceph-mon[79643]: pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:25 compute-1 nova_compute[228683]: 2025-11-25 10:06:25.028 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:06:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:25.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:06:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:26 compute-1 ceph-mon[79643]: pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:26 compute-1 nova_compute[228683]: 2025-11-25 10:06:26.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:26 compute-1 nova_compute[228683]: 2025-11-25 10:06:26.913 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:06:26 compute-1 nova_compute[228683]: 2025-11-25 10:06:26.914 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:06:26 compute-1 nova_compute[228683]: 2025-11-25 10:06:26.914 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:06:26 compute-1 nova_compute[228683]: 2025-11-25 10:06:26.914 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:06:26 compute-1 nova_compute[228683]: 2025-11-25 10:06:26.914 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.230 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:06:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1588309167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.248 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:06:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.442 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.443 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4886MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.443 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.443 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.548 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.548 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.574 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:06:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:27.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1588309167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:06:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2385987291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.914 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:06:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:27.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.918 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.929 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.931 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:06:27 compute-1 nova_compute[228683]: 2025-11-25 10:06:27.931 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:06:28 compute-1 ceph-mon[79643]: pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2385987291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:29.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/407174291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:29.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:29 compute-1 nova_compute[228683]: 2025-11-25 10:06:29.931 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:29 compute-1 nova_compute[228683]: 2025-11-25 10:06:29.932 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:06:30 compute-1 nova_compute[228683]: 2025-11-25 10:06:30.029 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:30 compute-1 ceph-mon[79643]: pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:30 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2752908873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2902791618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3383596287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-1 nova_compute[228683]: 2025-11-25 10:06:30.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:30 compute-1 nova_compute[228683]: 2025-11-25 10:06:30.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:30 compute-1 nova_compute[228683]: 2025-11-25 10:06:30.893 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:06:30 compute-1 nova_compute[228683]: 2025-11-25 10:06:30.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:06:30 compute-1 nova_compute[228683]: 2025-11-25 10:06:30.912 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:06:30 compute-1 nova_compute[228683]: 2025-11-25 10:06:30.912 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:31.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:32 compute-1 nova_compute[228683]: 2025-11-25 10:06:32.234 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:32 compute-1 ceph-mon[79643]: pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:33.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:33 compute-1 nova_compute[228683]: 2025-11-25 10:06:33.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:33.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:34 compute-1 ceph-mon[79643]: pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:34 compute-1 nova_compute[228683]: 2025-11-25 10:06:34.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:35 compute-1 nova_compute[228683]: 2025-11-25 10:06:35.032 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:35.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:35 compute-1 nova_compute[228683]: 2025-11-25 10:06:35.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:36 compute-1 ceph-mon[79643]: pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:36 compute-1 nova_compute[228683]: 2025-11-25 10:06:36.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:37 compute-1 nova_compute[228683]: 2025-11-25 10:06:37.236 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:37.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:37 compute-1 podman[247890]: 2025-11-25 10:06:37.779245405 +0000 UTC m=+0.032062662 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:06:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:37.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:38 compute-1 ceph-mon[79643]: pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:38 compute-1 nova_compute[228683]: 2025-11-25 10:06:38.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:39.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:39.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:40 compute-1 nova_compute[228683]: 2025-11-25 10:06:40.034 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:40 compute-1 ceph-mon[79643]: pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:06:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:41.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:06:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:06:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:06:42 compute-1 nova_compute[228683]: 2025-11-25 10:06:42.240 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:42 compute-1 ceph-mon[79643]: pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:43.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:43.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:44 compute-1 sudo[247909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:06:44 compute-1 sudo[247909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:44 compute-1 sudo[247909]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:44 compute-1 ceph-mon[79643]: pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:45 compute-1 nova_compute[228683]: 2025-11-25 10:06:45.034 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:45.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:45.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:46 compute-1 sudo[247935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:06:46 compute-1 sudo[247935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:46 compute-1 sudo[247935]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:46 compute-1 ceph-mon[79643]: pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:46 compute-1 sudo[247960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:06:46 compute-1 sudo[247960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:46 compute-1 podman[247984]: 2025-11-25 10:06:46.933801714 +0000 UTC m=+0.056086423 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:06:47 compute-1 nova_compute[228683]: 2025-11-25 10:06:47.240 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:47 compute-1 sudo[247960]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:06:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:47.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:06:47 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:06:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:06:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:47.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:06:48 compute-1 ceph-mon[79643]: pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:49.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:49.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:50 compute-1 nova_compute[228683]: 2025-11-25 10:06:50.034 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:50 compute-1 sudo[248039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:06:50 compute-1 sudo[248039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:50 compute-1 sudo[248039]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:50 compute-1 ceph-mon[79643]: pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:50 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:51.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:52 compute-1 nova_compute[228683]: 2025-11-25 10:06:52.243 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:52 compute-1 podman[248065]: 2025-11-25 10:06:52.783159105 +0000 UTC m=+0.038481030 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:06:52 compute-1 ceph-mon[79643]: pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:53.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:06:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/353881434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:06:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:06:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/353881434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:06:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:53.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:54 compute-1 ceph-mon[79643]: pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/353881434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:06:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/353881434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:06:55 compute-1 nova_compute[228683]: 2025-11-25 10:06:55.037 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:55.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:55.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:56 compute-1 ceph-mon[79643]: pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:57 compute-1 nova_compute[228683]: 2025-11-25 10:06:57.245 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:57.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:57 compute-1 nova_compute[228683]: 2025-11-25 10:06:57.679 228687 DEBUG oslo_concurrency.processutils [None req-dbea7ddb-eb32-4e63-954a-7e69465c4db7 331b917bd3774be79aebd5ee1af3b1fa f414368112e54eacbcaf4af631b3b667 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:06:57 compute-1 nova_compute[228683]: 2025-11-25 10:06:57.704 228687 DEBUG oslo_concurrency.processutils [None req-dbea7ddb-eb32-4e63-954a-7e69465c4db7 331b917bd3774be79aebd5ee1af3b1fa f414368112e54eacbcaf4af631b3b667 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:06:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:57.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:58 compute-1 ceph-mon[79643]: pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:59.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:06:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:06:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:59.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:07:00 compute-1 nova_compute[228683]: 2025-11-25 10:07:00.039 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:00 compute-1 ceph-mon[79643]: pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:01.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:01.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:02 compute-1 nova_compute[228683]: 2025-11-25 10:07:02.247 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:02 compute-1 ceph-mon[79643]: pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:03.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:03.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:04 compute-1 sudo[248089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:07:04 compute-1 sudo[248089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:04 compute-1 sudo[248089]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:07:04.424 142940 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:07:04 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:07:04.425 142940 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:07:04 compute-1 nova_compute[228683]: 2025-11-25 10:07:04.424 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:04 compute-1 ceph-mon[79643]: pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:07:05.008 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:07:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:07:05.009 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:07:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:07:05.009 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:07:05 compute-1 nova_compute[228683]: 2025-11-25 10:07:05.039 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:05.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:05.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:06 compute-1 ceph-mon[79643]: pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:07 compute-1 nova_compute[228683]: 2025-11-25 10:07:07.250 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:07.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:08 compute-1 podman[248116]: 2025-11-25 10:07:08.782638806 +0000 UTC m=+0.035867736 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:07:08 compute-1 ceph-mon[79643]: pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:09 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:07:09.427 142940 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ad0cdb86-b3c6-44c6-a890-1db2efa57d2b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:07:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:09.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:09.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:10 compute-1 nova_compute[228683]: 2025-11-25 10:07:10.040 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:10 compute-1 ceph-mon[79643]: pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:11.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:12 compute-1 nova_compute[228683]: 2025-11-25 10:07:12.253 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:12 compute-1 ceph-mon[79643]: pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:13.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:13.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:14 compute-1 ceph-mon[79643]: pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:15 compute-1 nova_compute[228683]: 2025-11-25 10:07:15.043 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:15.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:15.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:16 compute-1 ceph-mon[79643]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:17 compute-1 nova_compute[228683]: 2025-11-25 10:07:17.256 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:07:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:17.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:07:17 compute-1 podman[248137]: 2025-11-25 10:07:17.799070426 +0000 UTC m=+0.053547688 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 10:07:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:17.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:18 compute-1 ceph-mon[79643]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:19.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:20 compute-1 nova_compute[228683]: 2025-11-25 10:07:20.044 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:20 compute-1 ceph-mon[79643]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:21.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:22 compute-1 nova_compute[228683]: 2025-11-25 10:07:22.259 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:22 compute-1 ceph-mon[79643]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:23.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:23 compute-1 podman[248163]: 2025-11-25 10:07:23.778425779 +0000 UTC m=+0.034051190 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 10:07:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:23.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:24 compute-1 ceph-mon[79643]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:24 compute-1 sudo[248181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:07:24 compute-1 sudo[248181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:24 compute-1 sudo[248181]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:25 compute-1 nova_compute[228683]: 2025-11-25 10:07:25.046 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:25.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:26 compute-1 ceph-mon[79643]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:26 compute-1 nova_compute[228683]: 2025-11-25 10:07:26.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:26 compute-1 nova_compute[228683]: 2025-11-25 10:07:26.912 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:07:26 compute-1 nova_compute[228683]: 2025-11-25 10:07:26.912 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:07:26 compute-1 nova_compute[228683]: 2025-11-25 10:07:26.912 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:07:26 compute-1 nova_compute[228683]: 2025-11-25 10:07:26.912 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:07:26 compute-1 nova_compute[228683]: 2025-11-25 10:07:26.913 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:07:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:07:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3596999287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.249 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.262 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:27 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3596999287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.445 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.446 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4886MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.446 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.447 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.497 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.497 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.528 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:07:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:27.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:07:27 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2678034550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.864 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.868 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.885 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.886 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:07:27 compute-1 nova_compute[228683]: 2025-11-25 10:07:27.887 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:07:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:27.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:28 compute-1 ceph-mon[79643]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2678034550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:29.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:29.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:30 compute-1 nova_compute[228683]: 2025-11-25 10:07:30.047 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:30 compute-1 ceph-mon[79643]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:30 compute-1 nova_compute[228683]: 2025-11-25 10:07:30.887 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:30 compute-1 nova_compute[228683]: 2025-11-25 10:07:30.887 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:07:30 compute-1 nova_compute[228683]: 2025-11-25 10:07:30.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:31 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4223156061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:31 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1450865442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:31.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:31 compute-1 nova_compute[228683]: 2025-11-25 10:07:31.890 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:31 compute-1 nova_compute[228683]: 2025-11-25 10:07:31.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:31 compute-1 nova_compute[228683]: 2025-11-25 10:07:31.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:07:31 compute-1 nova_compute[228683]: 2025-11-25 10:07:31.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:07:31 compute-1 nova_compute[228683]: 2025-11-25 10:07:31.905 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:07:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:31.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:32 compute-1 nova_compute[228683]: 2025-11-25 10:07:32.265 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:32 compute-1 ceph-mon[79643]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3449064442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2299839423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:33.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:33 compute-1 nova_compute[228683]: 2025-11-25 10:07:33.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:33.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:34 compute-1 ceph-mon[79643]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:35 compute-1 nova_compute[228683]: 2025-11-25 10:07:35.048 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:35.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:35.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:36 compute-1 ceph-mon[79643]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:36 compute-1 nova_compute[228683]: 2025-11-25 10:07:36.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:37 compute-1 nova_compute[228683]: 2025-11-25 10:07:37.267 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:37.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:37 compute-1 nova_compute[228683]: 2025-11-25 10:07:37.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:37 compute-1 nova_compute[228683]: 2025-11-25 10:07:37.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:37.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:38 compute-1 ceph-mon[79643]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:39.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:39 compute-1 podman[248258]: 2025-11-25 10:07:39.783392521 +0000 UTC m=+0.034543729 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 10:07:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:39.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:40 compute-1 nova_compute[228683]: 2025-11-25 10:07:40.049 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:40 compute-1 ceph-mon[79643]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:41.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:41.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:42 compute-1 nova_compute[228683]: 2025-11-25 10:07:42.270 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:42 compute-1 ceph-mon[79643]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:43.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:44.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:44 compute-1 sudo[248276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:07:44 compute-1 sudo[248276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:44 compute-1 sudo[248276]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:44 compute-1 ceph-mon[79643]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:45 compute-1 nova_compute[228683]: 2025-11-25 10:07:45.051 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:45.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:07:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:46.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:07:46 compute-1 ceph-mon[79643]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:47 compute-1 nova_compute[228683]: 2025-11-25 10:07:47.274 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:47.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:48.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:48 compute-1 ceph-mon[79643]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:48 compute-1 podman[248303]: 2025-11-25 10:07:48.800042187 +0000 UTC m=+0.055096417 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:07:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:49.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:50.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:50 compute-1 nova_compute[228683]: 2025-11-25 10:07:50.051 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:50 compute-1 ceph-mon[79643]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:50 compute-1 sudo[248327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:07:50 compute-1 sudo[248327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:50 compute-1 sudo[248327]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:50 compute-1 sudo[248352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:07:50 compute-1 sudo[248352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:51 compute-1 sudo[248352]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:51.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:52 compute-1 nova_compute[228683]: 2025-11-25 10:07:52.276 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:52 compute-1 ceph-mon[79643]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:52 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:52 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:07:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:07:53 compute-1 ceph-mon[79643]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 536 B/s rd, 0 op/s
Nov 25 10:07:53 compute-1 ceph-mon[79643]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:07:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:07:53 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:07:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:53.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1289407423' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:07:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1289407423' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:07:54 compute-1 podman[248408]: 2025-11-25 10:07:54.785950366 +0000 UTC m=+0.040464982 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 10:07:55 compute-1 nova_compute[228683]: 2025-11-25 10:07:55.053 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:55 compute-1 ceph-mon[79643]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:55.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:56.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:56 compute-1 sudo[248426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:07:56 compute-1 sudo[248426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:56 compute-1 sudo[248426]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:57 compute-1 nova_compute[228683]: 2025-11-25 10:07:57.279 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:57 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:57 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:57 compute-1 ceph-mon[79643]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:57.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:58.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:07:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:59.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:59 compute-1 ceph-mon[79643]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:08:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:00.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:00 compute-1 nova_compute[228683]: 2025-11-25 10:08:00.053 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:01.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:01 compute-1 ceph-mon[79643]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 324 B/s rd, 0 op/s
Nov 25 10:08:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:02.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:02 compute-1 nova_compute[228683]: 2025-11-25 10:08:02.281 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:03.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:03 compute-1 ceph-mon[79643]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Nov 25 10:08:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:04.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:04 compute-1 sudo[248455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:08:04 compute-1 sudo[248455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:04 compute-1 sudo[248455]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:08:05.009 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:08:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:08:05.010 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:08:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:08:05.010 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:08:05 compute-1 nova_compute[228683]: 2025-11-25 10:08:05.054 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:05 compute-1 ceph-mon[79643]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:06.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:07 compute-1 nova_compute[228683]: 2025-11-25 10:08:07.284 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:08:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:07.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:08:07 compute-1 ceph-mon[79643]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:08.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:09.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:09 compute-1 ceph-mon[79643]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:10.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:10 compute-1 nova_compute[228683]: 2025-11-25 10:08:10.054 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:10 compute-1 podman[248483]: 2025-11-25 10:08:10.781981757 +0000 UTC m=+0.037348517 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:08:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:11.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:11 compute-1 ceph-mon[79643]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:12.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:12 compute-1 nova_compute[228683]: 2025-11-25 10:08:12.286 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:13.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:13 compute-1 ceph-mon[79643]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:14.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:15 compute-1 nova_compute[228683]: 2025-11-25 10:08:15.057 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:15.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:15 compute-1 ceph-mon[79643]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:16.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:17 compute-1 nova_compute[228683]: 2025-11-25 10:08:17.289 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:08:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:17.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:08:17 compute-1 ceph-mon[79643]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:19.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:19 compute-1 podman[248504]: 2025-11-25 10:08:19.80048623 +0000 UTC m=+0.055242995 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 10:08:19 compute-1 ceph-mon[79643]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:20.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:20 compute-1 nova_compute[228683]: 2025-11-25 10:08:20.058 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:21.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:21 compute-1 ceph-mon[79643]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:08:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:22.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:08:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:22 compute-1 nova_compute[228683]: 2025-11-25 10:08:22.292 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:23.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:23 compute-1 ceph-mon[79643]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:24.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:24 compute-1 sudo[248530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:08:24 compute-1 sudo[248530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:24 compute-1 sudo[248530]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:25 compute-1 nova_compute[228683]: 2025-11-25 10:08:25.059 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:08:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:25.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:08:25 compute-1 podman[248556]: 2025-11-25 10:08:25.785946198 +0000 UTC m=+0.040710263 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:08:25 compute-1 ceph-mon[79643]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:26.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:27 compute-1 nova_compute[228683]: 2025-11-25 10:08:27.295 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:27.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:27 compute-1 nova_compute[228683]: 2025-11-25 10:08:27.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:27 compute-1 nova_compute[228683]: 2025-11-25 10:08:27.913 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:08:27 compute-1 nova_compute[228683]: 2025-11-25 10:08:27.913 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:08:27 compute-1 nova_compute[228683]: 2025-11-25 10:08:27.913 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:08:27 compute-1 nova_compute[228683]: 2025-11-25 10:08:27.914 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:08:27 compute-1 nova_compute[228683]: 2025-11-25 10:08:27.914 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:08:27 compute-1 ceph-mon[79643]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:28.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:28 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:08:28 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2168162688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.250 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.447 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.449 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4876MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.449 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.450 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.699 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.700 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.803 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing inventories for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.893 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating ProviderTree inventory for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.893 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Updating inventory in ProviderTree for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.930 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing aggregate associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.950 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Refreshing trait associations for resource provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX512VAES,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:08:28 compute-1 nova_compute[228683]: 2025-11-25 10:08:28.962 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:08:28 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2168162688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:29 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:08:29 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/690462983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:29 compute-1 nova_compute[228683]: 2025-11-25 10:08:29.294 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:08:29 compute-1 nova_compute[228683]: 2025-11-25 10:08:29.298 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:08:29 compute-1 nova_compute[228683]: 2025-11-25 10:08:29.311 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:08:29 compute-1 nova_compute[228683]: 2025-11-25 10:08:29.312 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:08:29 compute-1 nova_compute[228683]: 2025-11-25 10:08:29.313 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:08:29 compute-1 nova_compute[228683]: 2025-11-25 10:08:29.313 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:29.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:29 compute-1 ceph-mon[79643]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/690462983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:29 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:30.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:30 compute-1 nova_compute[228683]: 2025-11-25 10:08:30.060 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:31 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3313080343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:31 compute-1 nova_compute[228683]: 2025-11-25 10:08:31.321 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:31 compute-1 nova_compute[228683]: 2025-11-25 10:08:31.321 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:08:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:31.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:32 compute-1 ceph-mon[79643]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1314521822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:32.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:32 compute-1 nova_compute[228683]: 2025-11-25 10:08:32.298 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:32 compute-1 nova_compute[228683]: 2025-11-25 10:08:32.889 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:32 compute-1 nova_compute[228683]: 2025-11-25 10:08:32.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:33 compute-1 ceph-mon[79643]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:33.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:33 compute-1 nova_compute[228683]: 2025-11-25 10:08:33.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:33 compute-1 nova_compute[228683]: 2025-11-25 10:08:33.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:08:33 compute-1 nova_compute[228683]: 2025-11-25 10:08:33.895 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:08:33 compute-1 nova_compute[228683]: 2025-11-25 10:08:33.906 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:08:33 compute-1 nova_compute[228683]: 2025-11-25 10:08:33.906 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:34 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/35916024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:34.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:35 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/341417532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:35 compute-1 ceph-mon[79643]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:35 compute-1 nova_compute[228683]: 2025-11-25 10:08:35.061 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:08:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:35.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:08:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:36.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:37 compute-1 nova_compute[228683]: 2025-11-25 10:08:37.301 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:37.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:37 compute-1 ceph-mon[79643]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:37 compute-1 nova_compute[228683]: 2025-11-25 10:08:37.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:38.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:38 compute-1 nova_compute[228683]: 2025-11-25 10:08:38.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:38 compute-1 nova_compute[228683]: 2025-11-25 10:08:38.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:39.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:39 compute-1 ceph-mon[79643]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:39 compute-1 nova_compute[228683]: 2025-11-25 10:08:39.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:39 compute-1 nova_compute[228683]: 2025-11-25 10:08:39.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:08:39 compute-1 nova_compute[228683]: 2025-11-25 10:08:39.908 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:08:40 compute-1 nova_compute[228683]: 2025-11-25 10:08:40.061 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:40.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:40 compute-1 nova_compute[228683]: 2025-11-25 10:08:40.903 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:41.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:41 compute-1 podman[248625]: 2025-11-25 10:08:41.808887491 +0000 UTC m=+0.062428457 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:08:41 compute-1 ceph-mon[79643]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:42.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:42 compute-1 nova_compute[228683]: 2025-11-25 10:08:42.304 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:43.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:43 compute-1 ceph-mon[79643]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:08:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:44.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:08:44 compute-1 sudo[248643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:08:44 compute-1 sudo[248643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:44 compute-1 sudo[248643]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:45 compute-1 nova_compute[228683]: 2025-11-25 10:08:45.063 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:45.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:45 compute-1 nova_compute[228683]: 2025-11-25 10:08:45.895 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:45 compute-1 nova_compute[228683]: 2025-11-25 10:08:45.895 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 10:08:45 compute-1 ceph-mon[79643]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:46.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:47 compute-1 nova_compute[228683]: 2025-11-25 10:08:47.308 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:47.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:47 compute-1 ceph-mon[79643]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:48.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:49.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:49 compute-1 ceph-mon[79643]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:50 compute-1 nova_compute[228683]: 2025-11-25 10:08:50.065 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:50.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:50 compute-1 podman[248671]: 2025-11-25 10:08:50.801937527 +0000 UTC m=+0.057714644 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:08:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:51.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:51 compute-1 ceph-mon[79643]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:52 compute-1 nova_compute[228683]: 2025-11-25 10:08:52.311 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:53 compute-1 ceph-mon[79643]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.952026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333952057, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2598, "num_deletes": 503, "total_data_size": 6377383, "memory_usage": 6493584, "flush_reason": "Manual Compaction"}
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333961139, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 4012324, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31685, "largest_seqno": 34278, "table_properties": {"data_size": 4002264, "index_size": 5914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 24263, "raw_average_key_size": 20, "raw_value_size": 3980130, "raw_average_value_size": 3300, "num_data_blocks": 254, "num_entries": 1206, "num_filter_entries": 1206, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065125, "oldest_key_time": 1764065125, "file_creation_time": 1764065333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 9135 microseconds, and 5803 cpu microseconds.
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961166) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 4012324 bytes OK
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961178) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961509) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961519) EVENT_LOG_v1 {"time_micros": 1764065333961516, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961530) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 6364970, prev total WAL file size 6364970, number of live WAL files 2.
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.962383) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3918KB)], [63(14MB)]
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333962427, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 18797237, "oldest_snapshot_seqno": -1}
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6334 keys, 12740339 bytes, temperature: kUnknown
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333992141, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 12740339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12699686, "index_size": 23711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 165436, "raw_average_key_size": 26, "raw_value_size": 12586927, "raw_average_value_size": 1987, "num_data_blocks": 937, "num_entries": 6334, "num_filter_entries": 6334, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063177, "oldest_key_time": 0, "file_creation_time": 1764065333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e6723030-6d80-4936-b19c-e97b87ba28bf", "db_session_id": "A0TFYK0291ZT1BVMXF7C", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.992279) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 12740339 bytes
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.992821) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 631.5 rd, 428.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 14.1 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.9) write-amplify(3.2) OK, records in: 7352, records dropped: 1018 output_compression: NoCompression
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.992833) EVENT_LOG_v1 {"time_micros": 1764065333992828, "job": 38, "event": "compaction_finished", "compaction_time_micros": 29764, "compaction_time_cpu_micros": 20588, "output_level": 6, "num_output_files": 1, "total_output_size": 12740339, "num_input_records": 7352, "num_output_records": 6334, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333993557, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333995469, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.962344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-1 ceph-mon[79643]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:54.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:55 compute-1 nova_compute[228683]: 2025-11-25 10:08:55.065 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:55 compute-1 ceph-mon[79643]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:56 compute-1 sudo[248697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:08:56 compute-1 sudo[248697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:56 compute-1 sudo[248697]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:56 compute-1 sudo[248728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 10:08:56 compute-1 sudo[248728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:56 compute-1 podman[248721]: 2025-11-25 10:08:56.595935732 +0000 UTC m=+0.041438195 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 10:08:56 compute-1 podman[248820]: 2025-11-25 10:08:56.966010682 +0000 UTC m=+0.038561934 container exec 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:08:57 compute-1 podman[248820]: 2025-11-25 10:08:57.05671793 +0000 UTC m=+0.129269171 container exec_died 188b4764fe5a25dad5c35d181be1674f10d722836e0eba68f4c6c5acbf6becb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:08:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:57 compute-1 nova_compute[228683]: 2025-11-25 10:08:57.312 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:57 compute-1 podman[248930]: 2025-11-25 10:08:57.392933722 +0000 UTC m=+0.034787100 container exec 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:08:57 compute-1 podman[248930]: 2025-11-25 10:08:57.401559229 +0000 UTC m=+0.043412607 container exec_died 48c3be01eb68c77d87f12f950cadd5a9f0be42049d86ff37bececa6f3d988615 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:08:57 compute-1 podman[249025]: 2025-11-25 10:08:57.669533836 +0000 UTC m=+0.034764926 container exec 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 10:08:57 compute-1 podman[249043]: 2025-11-25 10:08:57.72454332 +0000 UTC m=+0.043892481 container exec_died 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 10:08:57 compute-1 podman[249025]: 2025-11-25 10:08:57.726801146 +0000 UTC m=+0.092032236 container exec_died 6e7f6bc632cdfb383cceceb37c7d8ffa9ef2a3fd191e40d66583f8b379e996c0 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-1-xlgqkq)
Nov 25 10:08:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:57.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:57 compute-1 podman[249076]: 2025-11-25 10:08:57.855194346 +0000 UTC m=+0.033441322 container exec 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, distribution-scope=public)
Nov 25 10:08:57 compute-1 podman[249076]: 2025-11-25 10:08:57.865582355 +0000 UTC m=+0.043829312 container exec_died 49a746aec1ec4677a2a223c2c3b9235f2e0e3a7878b4afd79c503ba6a6748f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-1-adsqcr, name=keepalived, vcs-type=git, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, architecture=x86_64, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9)
Nov 25 10:08:57 compute-1 sudo[248728]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:57 compute-1 sudo[249102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:08:57 compute-1 sudo[249102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:57 compute-1 sudo[249102]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:57 compute-1 ceph-mon[79643]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:57 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:57 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:57 compute-1 sudo[249127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:08:57 compute-1 sudo[249127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:58 compute-1 sudo[249127]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:08:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:08:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:08:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:08:58 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:08:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:08:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:59.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:59 compute-1 ceph-mon[79643]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 530 B/s rd, 0 op/s
Nov 25 10:08:59 compute-1 ceph-mon[79643]: Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 25 10:09:00 compute-1 nova_compute[228683]: 2025-11-25 10:09:00.065 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:00.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:01 compute-1 sudo[249183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:09:01 compute-1 sudo[249183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:01 compute-1 sudo[249183]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:01.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:01 compute-1 ceph-mon[79643]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 530 B/s rd, 0 op/s
Nov 25 10:09:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:09:01 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:09:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:02.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:02 compute-1 nova_compute[228683]: 2025-11-25 10:09:02.315 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:03 compute-1 ceph-mon[79643]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 25 10:09:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:04.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:04 compute-1 sudo[249209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:09:04 compute-1 sudo[249209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:04 compute-1 sudo[249209]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:09:05.011 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:09:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:09:05.011 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:09:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:09:05.011 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:09:05 compute-1 nova_compute[228683]: 2025-11-25 10:09:05.066 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:05.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:06 compute-1 ceph-mon[79643]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 796 B/s rd, 0 op/s
Nov 25 10:09:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:06.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:07 compute-1 nova_compute[228683]: 2025-11-25 10:09:07.318 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:07.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:08 compute-1 ceph-mon[79643]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 25 10:09:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:08.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:09.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:10 compute-1 ceph-mon[79643]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 796 B/s rd, 0 op/s
Nov 25 10:09:10 compute-1 nova_compute[228683]: 2025-11-25 10:09:10.068 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:10.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:11 compute-1 ceph-mon[79643]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:11.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:12 compute-1 nova_compute[228683]: 2025-11-25 10:09:12.321 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:12 compute-1 podman[249238]: 2025-11-25 10:09:12.789056547 +0000 UTC m=+0.040856809 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:09:13 compute-1 ceph-mon[79643]: pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:15 compute-1 nova_compute[228683]: 2025-11-25 10:09:15.070 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:15 compute-1 ceph-mon[79643]: pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:15.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:16.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:17 compute-1 nova_compute[228683]: 2025-11-25 10:09:17.325 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:17 compute-1 ceph-mon[79643]: pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:18.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:19 compute-1 ceph-mon[79643]: pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:19.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:20 compute-1 nova_compute[228683]: 2025-11-25 10:09:20.072 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:20.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:21 compute-1 ceph-mon[79643]: pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:21 compute-1 podman[249259]: 2025-11-25 10:09:21.797931951 +0000 UTC m=+0.053028050 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:09:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:21.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:22 compute-1 nova_compute[228683]: 2025-11-25 10:09:22.100 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:22.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:22 compute-1 nova_compute[228683]: 2025-11-25 10:09:22.327 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:23 compute-1 ceph-mon[79643]: pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:24.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:24 compute-1 sudo[249283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:09:24 compute-1 sudo[249283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:24 compute-1 sudo[249283]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:25 compute-1 nova_compute[228683]: 2025-11-25 10:09:25.072 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:25 compute-1 ceph-mon[79643]: pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:25.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:26 compute-1 podman[249309]: 2025-11-25 10:09:26.787082873 +0000 UTC m=+0.041893112 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 10:09:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:27 compute-1 nova_compute[228683]: 2025-11-25 10:09:27.331 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:27 compute-1 ceph-mon[79643]: pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:27.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:28.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:28 compute-1 nova_compute[228683]: 2025-11-25 10:09:28.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:28 compute-1 nova_compute[228683]: 2025-11-25 10:09:28.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:09:28 compute-1 nova_compute[228683]: 2025-11-25 10:09:28.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:09:28 compute-1 nova_compute[228683]: 2025-11-25 10:09:28.910 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:09:28 compute-1 nova_compute[228683]: 2025-11-25 10:09:28.910 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:09:28 compute-1 nova_compute[228683]: 2025-11-25 10:09:28.911 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:09:29 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:09:29 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2359044906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.242 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.441 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.442 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4889MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.442 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.442 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.489 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.490 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.502 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:09:29 compute-1 ceph-mon[79643]: pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2359044906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:09:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:29.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:09:29 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:09:29 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/906682007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.839 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.843 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.856 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.857 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:09:29 compute-1 nova_compute[228683]: 2025-11-25 10:09:29.857 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:09:30 compute-1 nova_compute[228683]: 2025-11-25 10:09:30.074 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:30.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:30 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/906682007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:30 compute-1 nova_compute[228683]: 2025-11-25 10:09:30.857 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:30 compute-1 nova_compute[228683]: 2025-11-25 10:09:30.858 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:09:31 compute-1 ceph-mon[79643]: pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:31.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:32.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:32 compute-1 nova_compute[228683]: 2025-11-25 10:09:32.334 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3117030415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:32 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2986840769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:32 compute-1 nova_compute[228683]: 2025-11-25 10:09:32.890 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:33 compute-1 ceph-mon[79643]: pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:33.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:34 compute-1 nova_compute[228683]: 2025-11-25 10:09:34.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:35 compute-1 nova_compute[228683]: 2025-11-25 10:09:35.076 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:35 compute-1 ceph-mon[79643]: pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:35 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1967994406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:35 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:09:35 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2342457185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:35.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:35 compute-1 nova_compute[228683]: 2025-11-25 10:09:35.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:35 compute-1 nova_compute[228683]: 2025-11-25 10:09:35.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:09:35 compute-1 nova_compute[228683]: 2025-11-25 10:09:35.895 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:09:35 compute-1 nova_compute[228683]: 2025-11-25 10:09:35.905 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:09:35 compute-1 nova_compute[228683]: 2025-11-25 10:09:35.906 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2342457185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:37 compute-1 nova_compute[228683]: 2025-11-25 10:09:37.336 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:37 compute-1 ceph-mon[79643]: pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:37.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:37 compute-1 nova_compute[228683]: 2025-11-25 10:09:37.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:38.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:39 compute-1 ceph-mon[79643]: pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:39 compute-1 nova_compute[228683]: 2025-11-25 10:09:39.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:39 compute-1 nova_compute[228683]: 2025-11-25 10:09:39.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:40 compute-1 nova_compute[228683]: 2025-11-25 10:09:40.078 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:41 compute-1 ceph-mon[79643]: pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:41.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:42.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:42 compute-1 nova_compute[228683]: 2025-11-25 10:09:42.340 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:43 compute-1 ceph-mon[79643]: pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:43 compute-1 podman[249380]: 2025-11-25 10:09:43.797197443 +0000 UTC m=+0.041631680 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 10:09:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:09:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:44.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:09:44 compute-1 sudo[249396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:09:44 compute-1 sudo[249396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:44 compute-1 sudo[249396]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:45 compute-1 nova_compute[228683]: 2025-11-25 10:09:45.082 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:45 compute-1 ceph-mon[79643]: pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.003000029s ======
Nov 25 10:09:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:46.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000029s
Nov 25 10:09:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:47 compute-1 nova_compute[228683]: 2025-11-25 10:09:47.343 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:47 compute-1 ceph-mon[79643]: pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:47.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:48.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:49 compute-1 ceph-mon[79643]: pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:49.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:50 compute-1 nova_compute[228683]: 2025-11-25 10:09:50.082 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:50.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:51 compute-1 ceph-mon[79643]: pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:52.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:52 compute-1 nova_compute[228683]: 2025-11-25 10:09:52.347 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:52 compute-1 podman[249425]: 2025-11-25 10:09:52.823954632 +0000 UTC m=+0.072095829 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 10:09:53 compute-1 ceph-mon[79643]: pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:53.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:09:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1241217115' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:09:53 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:09:53 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1241217115' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:09:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:54.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1241217115' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:09:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1241217115' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:09:55 compute-1 nova_compute[228683]: 2025-11-25 10:09:55.084 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:55 compute-1 ceph-mon[79643]: pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:55.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:56.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:57 compute-1 nova_compute[228683]: 2025-11-25 10:09:57.350 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:57 compute-1 ceph-mon[79643]: pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:57 compute-1 podman[249451]: 2025-11-25 10:09:57.806051277 +0000 UTC m=+0.042996351 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 10:09:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:09:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:09:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:58.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:59 compute-1 ceph-mon[79643]: pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:09:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:59.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:00 compute-1 nova_compute[228683]: 2025-11-25 10:10:00.086 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:00.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:00 compute-1 ceph-mon[79643]: Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Nov 25 10:10:00 compute-1 ceph-mon[79643]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Nov 25 10:10:00 compute-1 ceph-mon[79643]:     daemon nfs.cephfs.2.0.compute-0.rychik on compute-0 is in error state
Nov 25 10:10:00 compute-1 ceph-mon[79643]:     daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1 is in error state
Nov 25 10:10:01 compute-1 ceph-mon[79643]: pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:01.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:01 compute-1 sudo[249470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:10:01 compute-1 sudo[249470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:01 compute-1 sudo[249470]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:01 compute-1 sudo[249495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:10:01 compute-1 sudo[249495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.002000019s ======
Nov 25 10:10:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000019s
Nov 25 10:10:02 compute-1 sudo[249495]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:02 compute-1 nova_compute[228683]: 2025-11-25 10:10:02.352 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:10:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:10:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:10:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:10:02 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:10:03 compute-1 ceph-mon[79643]: pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:03.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:04.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:04 compute-1 sudo[249550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:10:04 compute-1 sudo[249550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:04 compute-1 sudo[249550]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:10:05.013 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:10:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:10:05.013 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:10:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:10:05.013 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:10:05 compute-1 nova_compute[228683]: 2025-11-25 10:10:05.088 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:05 compute-1 sudo[249576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:10:05 compute-1 sudo[249576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:05 compute-1 sudo[249576]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:05 compute-1 ceph-mon[79643]: pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:05 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:05 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:05.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:06.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:07 compute-1 nova_compute[228683]: 2025-11-25 10:10:07.355 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:07 compute-1 ceph-mon[79643]: pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:07.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:08.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:09 compute-1 ceph-mon[79643]: pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:10 compute-1 nova_compute[228683]: 2025-11-25 10:10:10.090 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:10.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:11 compute-1 ceph-mon[79643]: pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:11.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:12.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:12 compute-1 nova_compute[228683]: 2025-11-25 10:10:12.359 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:13 compute-1 ceph-mon[79643]: pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:13.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:14 compute-1 podman[249605]: 2025-11-25 10:10:14.81398626 +0000 UTC m=+0.055626659 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 10:10:15 compute-1 nova_compute[228683]: 2025-11-25 10:10:15.094 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:15 compute-1 ceph-mon[79643]: pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:15.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:17 compute-1 nova_compute[228683]: 2025-11-25 10:10:17.363 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:17 compute-1 ceph-mon[79643]: pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:17.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:18.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:19 compute-1 ceph-mon[79643]: pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:20 compute-1 nova_compute[228683]: 2025-11-25 10:10:20.094 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:20.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:21 compute-1 ceph-mon[79643]: pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:22.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:22 compute-1 nova_compute[228683]: 2025-11-25 10:10:22.366 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:23 compute-1 ceph-mon[79643]: pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:23 compute-1 podman[249626]: 2025-11-25 10:10:23.810471961 +0000 UTC m=+0.064904975 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:10:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:24.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:24 compute-1 sudo[249649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:10:24 compute-1 sudo[249649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:24 compute-1 sudo[249649]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:25 compute-1 nova_compute[228683]: 2025-11-25 10:10:25.097 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:25 compute-1 ceph-mon[79643]: pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:25 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:25 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:25 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:25.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:26 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:26 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:26 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:26.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:27 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:27 compute-1 nova_compute[228683]: 2025-11-25 10:10:27.370 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:27 compute-1 ceph-mon[79643]: pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:27 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:27 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:27 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:27.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:28 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:28 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:28 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:28.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:28 compute-1 podman[249676]: 2025-11-25 10:10:28.791161725 +0000 UTC m=+0.044391212 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 10:10:28 compute-1 nova_compute[228683]: 2025-11-25 10:10:28.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:28 compute-1 nova_compute[228683]: 2025-11-25 10:10:28.914 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:10:28 compute-1 nova_compute[228683]: 2025-11-25 10:10:28.915 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:10:28 compute-1 nova_compute[228683]: 2025-11-25 10:10:28.916 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:10:28 compute-1 nova_compute[228683]: 2025-11-25 10:10:28.916 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:10:28 compute-1 nova_compute[228683]: 2025-11-25 10:10:28.916 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:10:29 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:10:29 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1933998820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.263 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.510 228687 WARNING nova.virt.libvirt.driver [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.511 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4898MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.512 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.512 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.561 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.562 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.576 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:10:29 compute-1 ceph-mon[79643]: pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:29 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1933998820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:29 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:29 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:29 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:29.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:29 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:10:29 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1706743548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.926 228687 DEBUG oslo_concurrency.processutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.932 228687 DEBUG nova.compute.provider_tree [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed in ProviderTree for provider: 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.950 228687 DEBUG nova.scheduler.client.report [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Inventory has not changed for provider 3a2a2bfb-a742-4ed2-bc60-b156bcbd0ff7 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.951 228687 DEBUG nova.compute.resource_tracker [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:10:29 compute-1 nova_compute[228683]: 2025-11-25 10:10:29.952 228687 DEBUG oslo_concurrency.lockutils [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:10:30 compute-1 nova_compute[228683]: 2025-11-25 10:10:30.100 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:30 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:30 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:30 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:30.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:30 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1706743548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:30 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:30 compute-1 nova_compute[228683]: 2025-11-25 10:10:30.952 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:30 compute-1 nova_compute[228683]: 2025-11-25 10:10:30.953 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:10:31 compute-1 ceph-mon[79643]: pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:31 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:31 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:31 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:31.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:32 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:32 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:32 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:32.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:32 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:32 compute-1 nova_compute[228683]: 2025-11-25 10:10:32.372 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:33 compute-1 ceph-mon[79643]: pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:33 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/623444006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:33 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:33 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:33 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:33.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:34 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:34 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:34 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:34 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3867604374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:34 compute-1 nova_compute[228683]: 2025-11-25 10:10:34.890 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:34 compute-1 nova_compute[228683]: 2025-11-25 10:10:34.893 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:35 compute-1 nova_compute[228683]: 2025-11-25 10:10:35.100 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:35 compute-1 ceph-mon[79643]: pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:35 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:35 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:35 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:35.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:35 compute-1 nova_compute[228683]: 2025-11-25 10:10:35.894 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:35 compute-1 nova_compute[228683]: 2025-11-25 10:10:35.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:10:35 compute-1 nova_compute[228683]: 2025-11-25 10:10:35.894 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:10:35 compute-1 nova_compute[228683]: 2025-11-25 10:10:35.906 228687 DEBUG nova.compute.manager [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:10:35 compute-1 nova_compute[228683]: 2025-11-25 10:10:35.906 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:36 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:36 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:36 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:36.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:36 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2033946624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:37 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:37 compute-1 nova_compute[228683]: 2025-11-25 10:10:37.375 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:37 compute-1 ceph-mon[79643]: pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:37 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1927464700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:37 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:37 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:37 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:37.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:38 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:38 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:38 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
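
The radosgw triplets throughout this window are load-balancer style health probes: 192.168.122.100 and 192.168.122.102 alternate anonymous HEAD / requests roughly every two seconds and get an empty 200 back within about a millisecond. The same probe as a sketch (host and port are assumptions; the beast frontend's listening port is deployment-specific):

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 with an empty body, matching the log
    conn.close()
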
Nov 25 10:10:38 compute-1 nova_compute[228683]: 2025-11-25 10:10:38.895 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:39 compute-1 ceph-mon[79643]: pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:39 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:39 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:39 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:39.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:40 compute-1 nova_compute[228683]: 2025-11-25 10:10:40.101 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:40 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:40 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:40 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:40.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:40 compute-1 nova_compute[228683]: 2025-11-25 10:10:40.891 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:40 compute-1 nova_compute[228683]: 2025-11-25 10:10:40.905 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:41 compute-1 ceph-mon[79643]: pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:41 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:41 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:41 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:41.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:41 compute-1 nova_compute[228683]: 2025-11-25 10:10:41.895 228687 DEBUG oslo_service.periodic_task [None req-8aa38c44-ff1a-49ef-b6ac-ccc1001fe2e2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:42 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:42 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:42 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:42.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:42 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:42 compute-1 nova_compute[228683]: 2025-11-25 10:10:42.377 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:43 compute-1 ceph-mon[79643]: pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:43 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:43 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:43 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:43.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:44 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:44 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:44 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:44.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:44 compute-1 sudo[249745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:10:44 compute-1 sudo[249745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:44 compute-1 sudo[249745]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:45 compute-1 nova_compute[228683]: 2025-11-25 10:10:45.104 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:45 compute-1 podman[249771]: 2025-11-25 10:10:45.792089642 +0000 UTC m=+0.045706330 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
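
The podman event above is the ovn_metadata_agent healthcheck passing: the 'test': '/openstack/healthcheck' command from config_data ran inside the container and exited 0, so health_status=healthy and health_failing_streak stays at 0. The same state can be read back on demand (the template path assumes a recent Podman; older releases exposed it as .State.Healthcheck.Status):

    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # "healthy" while the check keeps passing
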
Nov 25 10:10:45 compute-1 ceph-mon[79643]: pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:45 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:45 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:45 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:45 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:45.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:46 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:46 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:46 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:46.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:47 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:47 compute-1 nova_compute[228683]: 2025-11-25 10:10:47.380 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:47 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:47 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:47 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:47.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:47 compute-1 ceph-mon[79643]: pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:48 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:48 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:48 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:49 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:49 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:49 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:49.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:49 compute-1 ceph-mon[79643]: pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:50 compute-1 nova_compute[228683]: 2025-11-25 10:10:50.107 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:50 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:50 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:50 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:50.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:51 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:51 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:51 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:51.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:51 compute-1 ceph-mon[79643]: pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:52 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:52 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:52 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:52.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:52 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:52 compute-1 nova_compute[228683]: 2025-11-25 10:10:52.384 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:53 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:53 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:53 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:53.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:53 compute-1 ceph-mon[79643]: pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:54 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:54 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:54 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:54.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:54 compute-1 podman[249791]: 2025-11-25 10:10:54.803385882 +0000 UTC m=+0.056699421 container health_status b16bbaab796ae1af761ced1257a2934184abb13dcc76f9d8330d4d8f32abe7d1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:10:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1051001646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:10:54 compute-1 ceph-mon[79643]: from='client.? 192.168.122.10:0/1051001646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
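
The client.openstack 'df' and 'osd pool get-quota' dispatches from 192.168.122.10 look like an OpenStack service (most plausibly the Cinder RBD driver) polling capacity for the volumes pool. Roughly equivalent queries by hand, assuming the client.openstack keyring is readable locally:

    import json
    import subprocess

    df = json.loads(subprocess.check_output(
        ["ceph", "--id", "openstack", "df", "--format", "json"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "--id", "openstack", "osd", "pool", "get-quota", "volumes",
         "--format", "json"]))
    print(df["stats"]["total_avail_bytes"], quota["quota_max_bytes"])
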
Nov 25 10:10:55 compute-1 nova_compute[228683]: 2025-11-25 10:10:55.107 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:55 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:55 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:55 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:55 compute-1 ceph-mon[79643]: pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:56 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:56 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:56 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:56.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:57 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:57 compute-1 nova_compute[228683]: 2025-11-25 10:10:57.386 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:57 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:57 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:57 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:57.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:57 compute-1 ceph-mon[79643]: pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:58 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:58 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:58 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:58.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:59 compute-1 podman[249817]: 2025-11-25 10:10:59.815880732 +0000 UTC m=+0.071130861 container health_status 088298a790312dd86f7bd2b4a310325aa39a95e41d3b6ddb12228f0cc79983dc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Nov 25 10:10:59 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:10:59 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:59 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:59.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:59 compute-1 ceph-mon[79643]: pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:00 compute-1 nova_compute[228683]: 2025-11-25 10:11:00.108 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:00 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:00 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:00 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:00.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:00 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
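
'osd blocklist ls' from mgr.compute-0.zcfgby recurs roughly every 15 seconds in this window; a mgr module is auditing the OSD blocklist, the list of client addresses fenced off from the cluster. The same listing from the CLI:

    import subprocess

    print(subprocess.check_output(["ceph", "osd", "blocklist", "ls"], text=True))
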
Nov 25 10:11:01 compute-1 sshd-session[249835]: Accepted publickey for zuul from 192.168.122.10 port 43274 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 10:11:01 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:01 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:01 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:01.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:01 compute-1 systemd-logind[746]: New session 56 of user zuul.
Nov 25 10:11:01 compute-1 systemd[1]: Started Session 56 of User zuul.
Nov 25 10:11:01 compute-1 sshd-session[249835]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:11:01 compute-1 ceph-mon[79643]: pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:02 compute-1 sudo[249839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 10:11:02 compute-1 sudo[249839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
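
Session 56 is the CI user collecting diagnostics; the sudo line preserves the exact command. The same collection step as a sketch (flags copied verbatim from the log; -p limits sos to the named profiles):

    import subprocess

    subprocess.run(
        ["sudo", "/bin/bash", "-c",
         "rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && "
         "sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp "
         "-p container,openstack_edpm,system,storage,virt"],
        check=True,
    )

Much of what follows in the log (the ovs-vsctl probe, the libvirt socket probes, the ceph mon/mds command bursts) appears to be this sos run fanning out across its plugins.
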
Nov 25 10:11:02 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:02 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:02 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:02.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:02 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:02 compute-1 nova_compute[228683]: 2025-11-25 10:11:02.389 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:03 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:03 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:03 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:03.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:03 compute-1 ceph-mon[79643]: pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:04 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:04 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:04 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:04.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:04 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 10:11:04 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1855548595' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:11:04 compute-1 sudo[250050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:11:04 compute-1 sudo[250050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:04 compute-1 sudo[250050]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:04 compute-1 ceph-mon[79643]: from='client.18864 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:04 compute-1 ceph-mon[79643]: from='client.28595 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/192645614' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:11:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1855548595' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:11:04 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1775628257' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:11:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:11:05.014 142940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:11:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:11:05.015 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:11:05 compute-1 ovn_metadata_agent[142935]: 2025-11-25 10:11:05.015 142940 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:11:05 compute-1 nova_compute[228683]: 2025-11-25 10:11:05.109 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:05 compute-1 sudo[250139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:11:05 compute-1 sudo[250139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:05 compute-1 sudo[250139]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:05 compute-1 sudo[250164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:11:05 compute-1 sudo[250164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:05 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:05 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:11:05 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:05.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:11:05 compute-1 ceph-mon[79643]: from='client.28480 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:05 compute-1 ceph-mon[79643]: from='client.18879 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:05 compute-1 ceph-mon[79643]: from='client.28607 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:05 compute-1 ceph-mon[79643]: pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:05 compute-1 ceph-mon[79643]: from='client.28495 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:06 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:06 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:06 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:06.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:06 compute-1 sudo[250164]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:07 compute-1 ovs-vsctl[250247]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
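
The ovs-vsctl ERR is harmless: a probe asked for other_config:dpdk-init on the root Open_vSwitch record and the key is simply absent (no DPDK on this node). ovs-vsctl's --if-exists flag makes the same query return empty instead of erroring:

    import subprocess

    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "other_config:dpdk-init"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip() or "dpdk-init unset: kernel datapath, no DPDK")
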
Nov 25 10:11:07 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:07 compute-1 nova_compute[228683]: 2025-11-25 10:11:07.393 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:07 compute-1 virtqemud[228099]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 10:11:07 compute-1 virtqemud[228099]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 10:11:07 compute-1 virtqemud[228099]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
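
The three virtqemud failures mean the modular libvirt peer daemons (virtnetworkd, virtnwfilterd, virtstoraged) are not running on this host; an EDPM compute typically ships only virtqemud, so a collector probing the read-only sockets fails harmlessly. If those daemons were actually wanted, each missing *-sock-ro path maps to a socket unit (unit names assume stock modular libvirt packaging):

    import subprocess

    for unit in ("virtnetworkd-ro.socket", "virtnwfilterd-ro.socket",
                 "virtstoraged-ro.socket"):
        subprocess.run(["systemctl", "start", unit], check=False)  # no-op if active
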
Nov 25 10:11:07 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:07 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:07 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:07.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:08 compute-1 ceph-mon[79643]: pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:08 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:08 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:08 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:08.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:08 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: cache status {prefix=cache status} (starting...)
Nov 25 10:11:08 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:08 compute-1 lvm[250559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:11:08 compute-1 lvm[250559]: VG ceph_vg0 finished
Nov 25 10:11:08 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: client ls {prefix=client ls} (starting...)
Nov 25 10:11:08 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:08 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 10:11:08 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
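
The ceph-mds rejections above (and in the bursts that follow) are an admin-socket command sweep, very likely the sos ceph collection, hitting a standby MDS: stateful commands are refused on an inactive rank, while plain 'status' (run at the end of the burst) is answered. Checked by hand, assuming a hypothetical socket path (cephadm usually nests it under the cluster fsid directory):

    import json
    import subprocess

    sock = "/var/run/ceph/ceph-mds.cephfs.compute-1.knpqas.asok"  # assumed path
    st = json.loads(subprocess.check_output(
        ["ceph", "--admin-daemon", sock, "status"]))
    print(st["state"])  # e.g. "up:standby", which is why the commands are refused
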
Nov 25 10:11:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:11:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4041974016' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby'
Nov 25 10:11:09 compute-1 ceph-mon[79643]: pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='client.28628 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mon[79643]: pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby'
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4041974016' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mon[79643]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:11:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3774019563' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:11:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4051379379' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 25 10:11:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/698441370' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 10:11:09 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:09 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:09 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:09.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:09 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 25 10:11:09 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2712726419' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 25 10:11:10 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2520612999' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: ops {prefix=ops} (starting...)
Nov 25 10:11:10 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:10 compute-1 nova_compute[228683]: 2025-11-25 10:11:10.110 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:10 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 25 10:11:10 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/605611227' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.18912 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.28528 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.28534 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1364061829' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3774019563' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4051379379' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.28673 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.18942 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.28679 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2335620262' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2787533441' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/698441370' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.28703 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.28709 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.18978 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2182540642' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2712726419' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2520612999' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:11:10 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:10 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:10 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:10.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:10 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 10:11:10 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2915980700' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:10 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: session ls {prefix=session ls} (starting...)
Nov 25 10:11:10 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas Can't run that command on an inactive MDS!
Nov 25 10:11:10 compute-1 ceph-mds[85218]: mds.cephfs.compute-1.knpqas asok_command: status {prefix=status} (starting...)
Nov 25 10:11:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 10:11:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1167894320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/605611227' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.28591 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.19002 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2543856292' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.28760 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2624300683' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2915980700' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3634701221' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.28615 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.28621 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.19041 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3543896602' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/62713454' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/443310296' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.28645 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1167894320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 25 10:11:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1010831103' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 10:11:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2801456606' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 10:11:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3785103193' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:11 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:11 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:11 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:11.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:11 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 25 10:11:11 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3375895743' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 25 10:11:12 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3917255956' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.19056 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2725417152' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4244050022' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1938353341' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1401436123' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1010831103' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1772863110' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2801456606' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3785103193' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/814928782' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.28859 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4106823401' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3375895743' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/571812981' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/699907023' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:11:12 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:12 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:12 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:12.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 25 10:11:12 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2000504221' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 10:11:12 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1958780118' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:12 compute-1 nova_compute[228683]: 2025-11-25 10:11:12.394 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 25 10:11:12 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1657872299' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:11:12 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 25 10:11:12 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2972446715' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 25 10:11:13 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997004426' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3917255956' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.28883 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/694499281' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2000504221' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1958780118' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1657872299' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1898159654' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.19149 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2554816375' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.28928 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3866070346' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2761747961' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2972446715' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.28949 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.28964 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/997004426' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2399510507' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 10:11:13 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/595608251' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:32.931664+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 581632 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:33.931849+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:34.932001+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:35.932142+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:36.932241+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 565248 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:37.932366+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 565248 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:38.932478+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 557056 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:39.932574+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 557056 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:40.932693+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 557056 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:41.932801+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 548864 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:42.932960+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 548864 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:43.933108+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 540672 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:44.933281+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 540672 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:45.933405+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 532480 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:46.933544+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 532480 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:47.933686+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 524288 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:48.933808+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 524288 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912330 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:49.933903+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 62.074703217s of 62.080329895s, submitted: 4
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 516096 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:50.934039+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 507904 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:51.934138+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:52.934263+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 507904 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:53.934394+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:54.934512+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:55.934672+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:56.935303+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 483328 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:57.935447+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 483328 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:58.935593+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 475136 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:59.935745+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 475136 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:00.936118+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 466944 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:01.936216+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 466944 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:02.936318+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 466944 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:03.936441+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 458752 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:04.936576+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 458752 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:05.936700+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 450560 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:06.936824+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 450560 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:07.936980+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 450560 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:08.937137+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 442368 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:09.937275+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 442368 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:10.937403+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 434176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:11.937517+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 434176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:12.937692+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 434176 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:13.937808+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 425984 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:14.937934+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 425984 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:15.938036+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 425984 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:16.938135+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 417792 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:17.938250+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:18.938354+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:19.938457+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:20.938553+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 401408 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:21.938722+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 401408 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:22.938910+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 401408 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:23.939028+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 393216 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:24.939156+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 393216 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:25.939285+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 385024 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:26.939455+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 385024 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:27.939566+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:28.939674+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:29.939784+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:30.939922+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 368640 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:31.940057+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 368640 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:32.940205+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 360448 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:33.940329+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 360448 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:34.940502+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:35.940671+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:36.940788+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:37.940950+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 344064 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:38.941066+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 344064 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:39.941163+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 335872 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:40.941284+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 335872 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:41.941443+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 327680 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:42.941585+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 327680 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:43.941737+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 327680 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:44.941854+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 319488 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:45.941978+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 319488 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:46.942457+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 319488 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:47.942586+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 311296 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:48.942694+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 311296 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:49.942794+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 303104 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:50.942898+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 294912 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:51.943009+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 286720 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:52.943159+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 286720 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:53.943278+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 286720 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:54.943394+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 278528 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:55.943452+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 278528 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:56.943551+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 278528 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:57.943706+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 270336 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:58.943808+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 270336 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:59.943922+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 262144 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:00.944015+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 262144 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:01.944115+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 253952 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:02.944481+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 253952 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2821000 session 0x5584d46b1c20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:03.944595+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 253952 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:04.944711+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:05.944814+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:06.944926+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:07.945025+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 237568 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:08.945137+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 237568 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:09.945284+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:10.945446+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:11.945586+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:12.945728+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 221184 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:13.945866+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 221184 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:14.946014+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 212992 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911148 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:15.946171+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 212992 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:16.946277+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 212992 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 87.239952087s of 87.242134094s, submitted: 2
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:17.946385+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 204800 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:18.946450+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 204800 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:19.946555+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 196608 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:20.946665+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 180224 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:21.946801+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 172032 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:22.946961+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 172032 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:23.947068+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 172032 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:24.947210+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 163840 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:25.947313+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 163840 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:26.947417+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 163840 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:27.947554+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 155648 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:28.947668+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 155648 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:29.947771+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 147456 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:30.947878+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 147456 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:31.947974+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:32.948110+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:33.948213+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:34.948318+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 131072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:35.948451+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 131072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:36.948549+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 131072 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:37.948655+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 122880 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:38.948759+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 122880 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:39.948864+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:40.948977+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:41.949093+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:42.949215+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 106496 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:43.949376+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 106496 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:44.949539+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:45.949662+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:46.949797+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:47.949910+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 90112 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:48.950028+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 90112 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:49.950134+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 81920 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:50.950246+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 81920 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:51.950372+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 81920 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:52.950490+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 73728 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:53.950596+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 73728 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:54.950695+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 65536 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:55.950836+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 65536 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:56.950968+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 65536 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:57.951129+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 57344 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:58.951227+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 57344 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:59.951321+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 49152 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:00.951439+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 49152 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:01.951532+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 49152 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:02.951656+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 40960 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:03.951763+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 40960 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:04.951866+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 32768 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912660 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:05.951968+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 32768 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:06.952063+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 32768 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:07.952166+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 24576 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.442672729s of 51.444469452s, submitted: 1
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:08.952259+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 24576 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:09.952353+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 16384 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:10.952521+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 16384 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:11.952627+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 8192 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:12.952781+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 8192 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:13.952891+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 8192 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:14.952996+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 0 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:15.953095+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 0 heap: 77553664 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:16.953195+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1040384 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:17.953312+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1040384 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:18.953462+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1040384 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:19.953572+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1032192 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:20.953678+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1032192 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:21.953784+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2820800 session 0x5584d47974a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1024000 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:22.953902+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1024000 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:23.954006+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1015808 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:24.954104+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1015808 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:25.954222+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:26.954317+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:27.954436+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:28.954566+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:29.954696+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:30.954786+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:31.954875+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:32.954990+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:33.955100+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:34.955583+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:35.955719+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:36.955844+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:37.955965+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:38.956127+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:39.956239+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:40.956341+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:41.956463+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:42.956593+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:43.956692+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 950272 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:44.956785+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 950272 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:45.956939+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:46.957051+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:47.957162+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:48.957260+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:49.957357+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:50.957486+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:51.957593+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:52.957713+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:53.957828+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:54.957942+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:55.958059+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:56.958173+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:57.958282+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 6386 writes, 26K keys, 6386 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6386 writes, 1197 syncs, 5.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6386 writes, 26K keys, 6386 commit groups, 1.0 writes per commit group, ingest: 19.54 MB, 0.03 MB/s
                                           Interval WAL: 6386 writes, 1197 syncs, 5.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:58.958391+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:59.958461+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:00.958563+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:01.958663+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:02.958778+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:03.958880+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:04.958990+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:05.959089+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:06.959185+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:07.959288+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:08.959471+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 778240 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:09.959825+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 778240 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:10.959933+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:11.960256+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:12.960384+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:13.960494+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:14.960604+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:15.960706+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:16.960905+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:17.961071+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:18.961231+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:19.961352+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:20.961471+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:21.961610+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:22.961741+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:23.961866+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:24.961972+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 720896 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:25.962103+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 720896 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:26.962198+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:27.962314+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:28.962448+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:29.962555+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:30.962722+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:31.962866+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:32.963021+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:33.963137+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:34.963266+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:35.963544+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:36.963677+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:37.963859+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:38.964018+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:39.964186+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:40.964355+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:41.964491+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:42.964631+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:43.964802+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:44.964922+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:45.965052+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:46.965219+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:47.965343+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:48.965452+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:49.965558+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:50.965659+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:51.965757+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:52.965868+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:53.965981+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:54.966099+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:55.966211+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:56.966322+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:57.966449+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:58.966557+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:59.966662+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:00.966760+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:01.966856+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:02.966964+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:03.967065+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:04.967154+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:05.967250+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:06.967381+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.153465271s of 119.154769897s, submitted: 1
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:07.967433+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:08.967550+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:09.968524+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:10.968691+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:11.968857+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:12.969039+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1294336 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:13.969153+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1286144 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:14.969328+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 1253376 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:15.969451+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 1253376 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:16.969557+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 1253376 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:17.969649+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:18.969772+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:19.969876+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:20.969969+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:21.970070+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:22.970191+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1245184 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:23.970294+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 1236992 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:24.970396+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 1236992 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:25.970537+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1228800 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:26.970687+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1228800 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:27.970829+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1228800 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:28.970959+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1220608 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:29.971111+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1220608 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:30.971253+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 1212416 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:31.971406+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1204224 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:32.971600+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1204224 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:33.971748+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1204224 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:34.971892+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1196032 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:35.972041+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1196032 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:36.972215+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1179648 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:37.972341+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1179648 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:38.972476+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1179648 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:39.972618+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1171456 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:40.972758+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1171456 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:41.972899+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1163264 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:42.973060+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1163264 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:43.973221+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 1155072 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:44.973389+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 1155072 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:45.973553+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 1155072 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:46.973712+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1146880 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:47.973826+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1146880 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:48.973931+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1138688 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:49.974032+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1138688 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:50.974173+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1138688 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:51.974346+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 1130496 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:52.974471+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 1122304 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:53.974599+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1114112 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:54.974742+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1114112 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:55.974837+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1114112 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:56.974939+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:57.975039+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:58.975151+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:59.975299+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:00.975406+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:01.975550+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:02.975664+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1105920 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:03.975860+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 1089536 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:04.975979+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 1089536 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:05.976133+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:06.976233+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:07.976366+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:08.976452+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:09.976566+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1081344 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:10.976672+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:11.976777+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:12.976913+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:13.977013+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:14.977141+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1073152 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:15.977303+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:16.977442+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:17.977538+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:18.977641+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:19.977743+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1056768 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:20.977851+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:21.977967+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:22.978111+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:23.978240+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:24.978344+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:25.978450+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:26.978552+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:27.978646+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:28.978943+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:29.979031+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:30.979161+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:31.979264+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:32.979395+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:33.979555+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:34.979660+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:35.979756+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:36.979921+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:37.980075+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:38.980180+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:39.980292+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:40.980430+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:41.980536+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:42.980662+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:43.980765+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:44.980935+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912069 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:45.981026+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:46.981119+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:47.981208+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:48.981303+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 101.919792175s of 102.060813904s, submitted: 258
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:49.981441+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:50.981558+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:51.981703+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:52.981868+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:53.982032+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:54.982161+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:55.982275+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:56.982403+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:57.982519+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:58.982627+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:59.982738+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:00.982871+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:01.982975+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:02.983109+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:03.983225+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:04.983342+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:05.983455+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:06.983551+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:07.983657+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:08.983766+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:09.983868+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:10.983979+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:11.984080+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:12.984219+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:13.984336+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:14.984445+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:15.984546+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:16.984715+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:17.984866+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:18.984996+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:19.985122+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:20.985281+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:21.985463+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:22.985621+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:23.985781+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:24.985905+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:25.986016+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:26.986179+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:27.986284+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:28.986395+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:29.986514+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:30.986627+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:31.986722+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:32.986869+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:33.986991+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:34.987117+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:35.987268+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:36.987377+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:37.987508+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:38.987619+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:39.987738+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 10:11:13 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2482510223' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:40.987865+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:41.987971+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:42.988121+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:43.988246+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:44.988384+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:45.988453+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:46.988606+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:47.988731+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:48.988861+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:49.989735+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:50.989840+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:51.989966+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:52.990115+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:53.990229+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:54.990327+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:55.990432+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:56.990534+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:57.990634+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:58.990727+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:59.990828+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:00.990927+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:01.991064+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:02.991209+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:03.991312+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:04.991442+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:05.991584+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:06.991670+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:07.992724+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:08.992821+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:09.992925+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:10.993045+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:11.993187+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:12.993349+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:13.993484+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:14.993579+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:15.993670+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:16.993787+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:17.993917+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:18.994032+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:19.994134+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913581 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:20.994237+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:21.994347+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1fc00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 92.995887756s of 92.997634888s, submitted: 1
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:22.994472+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:23.994580+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:24.994713+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915093 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:25.994813+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:26.994915+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:27.995487+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:28.995666+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:29.995759+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914502 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:30.995872+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:31.995977+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:32.996091+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:33.996186+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:34.996284+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914502 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:35.996388+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:36.996467+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:37.996559+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:38.996646+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:39.996746+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914502 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:40.996835+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.800531387s of 18.803356171s, submitted: 2
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:41.996928+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:42.997037+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:43.997128+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:44.997223+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916014 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:45.997313+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:46.997400+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:47.997504+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:48.997603+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:49.997718+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:50.997815+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915423 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:51.997921+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1b1fc00 session 0x5584d4ed5860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:52.998135+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:53.998241+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:54.998384+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:55.998541+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915423 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:56.998685+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:57.998798+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:58.998928+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:59.999035+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:00.999145+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915423 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:01.999276+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:02.999392+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:03.999521+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:04.999657+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:05.999758+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915423 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:06.999907+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:08.000005+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.186798096s of 27.189485550s, submitted: 2
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:09.000101+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:10.000204+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:11.000300+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916935 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:12.000395+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:13.000529+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:14.000656+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:15.000797+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:16.000947+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:17.001049+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:18.001157+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:19.001316+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:20.001432+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1e0d800 session 0x5584d4ed4b40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:21.001548+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:22.001642+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:23.001756+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:24.001857+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:25.001962+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:26.002036+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:27.002128+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:28.002223+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:29.002359+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:30.002494+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:31.002650+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:32.002786+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:33.002950+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:34.003056+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:35.003161+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:36.003272+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916344 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:37.003352+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.246471405s of 28.250848770s, submitted: 2
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:38.003450+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:39.003554+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:40.003654+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:41.003745+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917856 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:42.003843+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:43.003956+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:44.004057+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:45.004191+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:46.004298+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:47.004404+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:48.004534+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:49.004658+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:50.004795+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:51.004909+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:52.005058+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:53.005215+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:54.005384+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:55.005540+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:56.005668+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:57.005770+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:58.005944+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:59.006071+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:00.006223+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:01.006332+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:02.006849+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:03.006985+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:04.007107+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:05.007215+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:06.007324+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:07.007458+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:08.007602+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:09.007734+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:10.007868+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:11.007968+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:12.008102+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2820400 session 0x5584d4902780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:13.008216+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:14.008320+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:15.008448+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:16.008576+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:17.008710+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:18.008842+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:19.008988+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:20.009117+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:21.009231+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:22.009356+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:23.009465+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:24.009591+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:25.009693+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:26.009812+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917265 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.348262787s of 49.350105286s, submitted: 2
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:27.009949+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:28.010072+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:29.010195+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:30.010319+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:31.010455+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920289 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:32.010590+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:33.010778+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:34.010909+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:35.011114+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:36.011268+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:37.011457+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:38.011591+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:39.011715+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:40.011858+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:41.012016+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:42.012160+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:43.012351+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:44.012444+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:45.012596+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:46.012689+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:47.013158+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:48.013319+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:49.013481+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:50.013615+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:51.013768+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:52.013899+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:53.014043+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:54.014147+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:55.014272+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:56.014400+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:57.014523+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:58.014637+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:59.014754+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:00.014904+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:01.015028+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:02.015189+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:03.015373+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:04.015486+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:05.015616+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:06.015749+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:07.015885+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:08.016040+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:09.016176+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:10.016308+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:11.016487+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:12.016615+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:13.016779+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:14.016907+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:15.017056+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: mgrc ms_handle_reset ms_handle_reset con 0x5584d133d400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/92811439
Nov 25 10:11:13 compute-1 ceph-osd[77354]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/92811439,v1:192.168.122.100:6801/92811439]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: get_auth_request con 0x5584d1b1fc00 auth_method 0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: mgrc handle_mgr_configure stats_period=5
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:16.017169+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1e0d000 session 0x5584d1e73a40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1753400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:17.017301+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:18.017439+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:19.017597+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:20.017733+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:21.017866+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:22.018010+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d1f17000 session 0x5584d1e73680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:23.018164+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:24.018270+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:25.018431+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:26.018587+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:27.018718+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:28.018884+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:29.019006+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:30.019141+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:31.019273+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:32.019403+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:33.019581+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:34.019722+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:35.019832+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:36.019959+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:37.020110+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:38.020235+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:39.020385+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 ms_handle_reset con 0x5584d2820800 session 0x5584d49023c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:40.020512+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:41.020625+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919698 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:42.020742+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:43.020870+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:44.021033+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:45.021176+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 78.668830872s of 78.672355652s, submitted: 3
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:46.021323+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:47.021450+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:48.021606+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:49.021720+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:50.021823+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:51.022001+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:52.022156+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:53.022336+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:54.022473+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:55.022641+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:56.022805+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:57.022944+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:58.023100+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:59.023271+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:00.023454+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:01.023583+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:02.023724+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:03.023885+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:04.024045+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:05.024181+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:06.024314+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:07.024455+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:08.024787+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:09.024951+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:10.025080+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:11.025237+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:12.025403+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:13.025578+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:14.025737+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:15.025893+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:16.026056+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 679936 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:17.026205+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:18.026351+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:19.026488+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:20.026635+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:21.026783+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:22.026894+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:23.027060+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:24.027188+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:25.027342+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:26.027503+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:27.027660+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:28.027781+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:29.027894+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:30.028051+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:31.028200+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919107 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:32.028325+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:33.028485+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:34.028623+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:35.028762+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.528675079s of 50.530040741s, submitted: 1
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:36.028891+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:37.029023+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:38.029159+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:39.029305+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:40.029446+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:41.029578+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 663552 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:42.029746+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 663552 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:43.029893+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:44.030059+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:45.030221+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:46.030315+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:47.030468+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:48.030622+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:49.031336+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:50.031437+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:51.031583+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:52.031705+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:53.031876+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:54.032040+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:55.032193+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:56.032348+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:57.032508+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:58.032666+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:59.032821+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:00.032955+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:01.033115+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:02.033269+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:03.033444+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:04.033583+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:05.033710+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:06.033873+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:07.034033+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:08.034196+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:09.034304+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:10.034457+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:11.034558+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:12.034717+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:13.034967+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:14.035147+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:15.035340+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:16.035529+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920619 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:17.035704+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:18.035930+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fca16000/0x0/0x4ffc00000, data 0x166cc1/0x206000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:19.036124+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:20.036272+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.554691315s of 44.555805206s, submitted: 1
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1d57400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:21.036387+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 17145856 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034365 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 138 ms_handle_reset con 0x5584d1d57400 session 0x5584d4f2e960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:22.036512+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 17014784 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _renew_subs
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 140 ms_handle_reset con 0x5584d1e0d800 session 0x5584d4f2f0e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fad9a000/0x0/0x4ffc00000, data 0x1ddd005/0x1e80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:23.036682+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 16982016 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:24.036836+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:25.037016+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:26.037202+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131657 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:27.037382+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:28.037531+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:29.037656+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:30.037817+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1e0d000 session 0x5584d1f23680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:31.037955+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131657 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:32.038092+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:33.038273+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.414980888s of 13.458241463s, submitted: 60
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:34.038388+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:35.038537+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad91000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 16924672 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:36.038690+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 16916480 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133169 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:37.038842+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 16908288 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:38.038968+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 16908288 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:39.039131+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 16908288 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:40.039284+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:41.039469+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131213 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:42.039625+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:43.039759+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:44.039892+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 16900096 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1f17000 session 0x5584d46ae780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:45.039997+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:46.040140+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131213 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:47.040244+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.465446472s of 13.467451096s, submitted: 2
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:48.040395+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:49.040550+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:50.040689+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:51.040828+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132725 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:52.040949+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:53.041103+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:54.041260+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:55.041380+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:56.041523+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x1de1102/0x1e87000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 16891904 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132725 data_alloc: 218103808 data_used: 212992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:57.041677+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d2820800 session 0x5584d4ed5e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f16c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1f16c00 session 0x5584d4ed4000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4f5a1e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 16875520 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:58.041803+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.333181381s of 11.334502220s, submitted: 1
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 8929280 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 7032 writes, 27K keys, 7032 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7032 writes, 1507 syncs, 4.67 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 646 writes, 1251 keys, 646 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 646 writes, 310 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d05009b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5584d0501350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1e0d800 session 0x5584d4f5a5a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:59.041903+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 ms_handle_reset con 0x5584d1f17000 session 0x5584d429e780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2334720 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:00.042047+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2334720 heap: 97484800 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:01.042193+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _renew_subs
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d2820800 session 0x5584d429e5a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d2145800 session 0x5584d2c905a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1e0d000 session 0x5584d129e5a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1e0d800 session 0x5584d1f232c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1f17000 session 0x5584d27174a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 4931584 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214838 data_alloc: 234881024 data_used: 13844480
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:02.042329+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac5a000/0x0/0x4ffc00000, data 0x1f17341/0x1fc0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d2820800 session 0x5584d2716b40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 4931584 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:03.042485+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 4931584 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:04.042582+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d2145400 session 0x5584d4703e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1e0d000 session 0x5584d47021e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 ms_handle_reset con 0x5584d1e0d800 session 0x5584d42a03c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95617024 unmapped: 5029888 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:05.042699+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1f17000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 4988928 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa95b000/0x0/0x4ffc00000, data 0x2217364/0x22c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:06.042802+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa95b000/0x0/0x4ffc00000, data 0x2217364/0x22c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98615296 unmapped: 2031616 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235935 data_alloc: 234881024 data_used: 17027072
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:07.042931+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:08.043074+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:09.043205+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:10.043326+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.486788750s of 11.513138771s, submitted: 33
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:11.043466+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa957000/0x0/0x4ffc00000, data 0x2219336/0x22c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239062 data_alloc: 234881024 data_used: 17031168
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:12.043613+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa957000/0x0/0x4ffc00000, data 0x2219336/0x22c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:13.043770+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:14.043892+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 2023424 heap: 100646912 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:15.044001+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa957000/0x0/0x4ffc00000, data 0x2219336/0x22c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 101474304 unmapped: 1269760 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:16.044143+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266752 data_alloc: 234881024 data_used: 17367040
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:17.044280+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:18.044439+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:19.044586+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:20.044888+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:21.045008+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266752 data_alloc: 234881024 data_used: 17367040
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:22.045147+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.168714523s of 12.192200661s, submitted: 43
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:23.045276+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:24.045403+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:25.045528+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:26.045688+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266768 data_alloc: 234881024 data_used: 17367040
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:27.045833+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:28.045995+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:29.046114+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:30.046257+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:31.046363+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266768 data_alloc: 234881024 data_used: 17367040
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:32.046498+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0x23be336/0x2469000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:33.046646+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 655360 heap: 102744064 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:34.046781+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145000 session 0x5584d4f5b0e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144c00 session 0x5584d4f5af00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144800 session 0x5584d2717c20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 311296 heap: 103792640 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4b60b40
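Bursts like the one above pair each handle_auth_request (an inbound msgr v2 connection being issued an auth challenge) with an ms_handle_reset on the same con pointer moments later, i.e. the peer authenticates and then drops the session. That pattern is consistent with short-lived probe or health-check connections rather than failures; nothing is retried or logged at higher severity. A sketch that tallies the pairs from a saved copy of this log (the filename is hypothetical):

    import re
    from collections import Counter

    challenges, resets = Counter(), Counter()
    for line in open("messages.log"):                     # hypothetical saved log
        if m := re.search(r"added challenge on (0x[0-9a-f]+)", line):
            challenges[m.group(1)] += 1
        elif m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line):
            resets[m.group(1)] += 1
    for con, n in challenges.items():
        print(con, n, resets[con])    # challenge/reset counts track each other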
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:35.046888+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d800 session 0x5584d471de00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.144062996s of 13.145666122s, submitted: 1
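_kv_sync_thread is the BlueStore thread that batches transaction commits into RocksDB; the utilization line reports how much of the sampled interval it spent idle and how many transactions it submitted. Here that works out to well under 0.1% busy (the busier sample further down, "idle 20.747106552s of 20.908111572s, submitted: 283", is still under 1%):

    idle, total, submitted = 13.144062996, 13.145666122, 1
    busy = total - idle
    print(f"busy {busy * 1e3:.1f} ms ({busy / total:.3%} of {total:.1f} s), "
          f"{submitted} txn submitted")   # -> busy 1.6 ms (0.012% of 13.1 s), 1 txn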
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144c00 session 0x5584d4ed4000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145000 session 0x5584d4ed1680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144400 session 0x5584d4f5b860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4b614a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d800 session 0x5584d4903860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 10338304 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:36.046992+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 10338304 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329516 data_alloc: 234881024 data_used: 18419712
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:37.047084+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f79000/0x0/0x4ffc00000, data 0x2bf8336/0x2ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 10338304 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:38.047214+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2144c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2144c00 session 0x5584d46b1e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f79000/0x0/0x4ffc00000, data 0x2bf8336/0x2ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 10354688 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:39.047336+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145000 session 0x5584d47023c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 10354688 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:40.047594+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d48b5800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d48b5800 session 0x5584d4ed01e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4ed1e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 10338304 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:41.047692+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4b9a800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 10280960 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336051 data_alloc: 234881024 data_used: 18481152
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:42.047793+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f54000/0x0/0x4ffc00000, data 0x2c1c346/0x2cc8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [1])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f54000/0x0/0x4ffc00000, data 0x2c1c346/0x2cc8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:43.047913+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f54000/0x0/0x4ffc00000, data 0x2c1c346/0x2cc8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:44.048021+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:45.048164+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:46.048319+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391379 data_alloc: 234881024 data_used: 25964544
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:47.048439+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.915447235s of 11.942303658s, submitted: 23
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f54000/0x0/0x4ffc00000, data 0x2c1c346/0x2cc8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:48.048534+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 2965504 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:49.048644+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9f52000/0x0/0x4ffc00000, data 0x2c1d346/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 2932736 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:50.048772+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 2932736 heap: 113377280 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:51.048907+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
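This RocksDB line marks a memtable rotation in the column family tagged [L] (in BlueStore's sharded RocksDB layout that shard appears to hold the deferred-write keys): the filled memtable is sealed, new writes land in a fresh memtable backed by WAL #43, and "Immutable memtables: 0" says no earlier memtables are still queued for flush. Extracting the two numbers, using the line verbatim:

    import re
    line = ("rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created "
            "with log file: #43. Immutable memtables: 0.")
    wal, imm = re.search(r"#(\d+)\. Immutable memtables: (\d+)", line).groups()
    print(f"active memtable now backed by WAL #{wal}; {imm} immutable memtable(s) pending flush")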
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 1998848 heap: 120717312 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1474675 data_alloc: 234881024 data_used: 26361856
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8470000/0x0/0x4ffc00000, data 0x3560346/0x360c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:52.049002+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 1032192 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:53.049103+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82d7000/0x0/0x4ffc00000, data 0x36f8346/0x37a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 999424 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:54.049227+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82d7000/0x0/0x4ffc00000, data 0x36f8346/0x37a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 991232 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:55.049382+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 991232 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:56.049531+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 983040 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491945 data_alloc: 234881024 data_used: 26857472
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:57.049685+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.966114044s of 10.044518471s, submitted: 112
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:58.049814+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:59.049940+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82b7000/0x0/0x4ffc00000, data 0x3719346/0x37c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:00.050038+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82b7000/0x0/0x4ffc00000, data 0x3719346/0x37c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:01.050207+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 3186688 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1482465 data_alloc: 234881024 data_used: 26857472
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:02.050368+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4b9a800 session 0x5584d4f2f860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f0b40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bab000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bab000 session 0x5584d42a1e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:03.050563+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:04.050701+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:05.050863+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9612000/0x0/0x4ffc00000, data 0x23bf336/0x246a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:06.051061+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 9961472 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276728 data_alloc: 234881024 data_used: 17502208
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:07.051178+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1f17000 session 0x5584d4f2ed20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820800 session 0x5584d4f2ef00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.505298615s of 10.529612541s, submitted: 37
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 9953280 heap: 121765888 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:08.051309+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d4f2f860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 14532608 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:09.051399+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:10.051535+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:11.051708+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205070 data_alloc: 234881024 data_used: 13979648
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:12.051878+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:13.052001+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 14360576 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:14.052136+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 14352384 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:15.052283+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 14352384 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:16.052443+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 14352384 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205070 data_alloc: 234881024 data_used: 13979648
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:17.052544+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 14352384 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:18.052685+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:19.052825+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:20.052920+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d45c4780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:21.053047+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205070 data_alloc: 234881024 data_used: 13979648
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:22.053195+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:23.053334+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:24.053453+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:25.053590+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 14344192 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:26.053733+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 14336000 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205070 data_alloc: 234881024 data_used: 13979648
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:27.053840+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 14336000 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:28.053969+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d47f2000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4b9a800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.747106552s of 20.908111572s, submitted: 283
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4b9a800 session 0x5584d44301e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d27161e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4953e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820800 session 0x5584d4953860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 12705792 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d471de00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:29.054072+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 12705792 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:30.054213+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9668000/0x0/0x4ffc00000, data 0x1f5a313/0x2004000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 12705792 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:31.054312+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bab000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bab000 session 0x5584d47f0b40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9668000/0x0/0x4ffc00000, data 0x1f5a313/0x2004000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 12705792 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224133 data_alloc: 234881024 data_used: 14897152
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:32.054484+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110116864 unmapped: 12697600 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:33.054661+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d4f2e960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9668000/0x0/0x4ffc00000, data 0x1f5a313/0x2004000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110116864 unmapped: 12697600 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:34.054813+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d45d30e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d45d25a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 12673024 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:35.054932+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 13115392 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:36.055054+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 13115392 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238946 data_alloc: 234881024 data_used: 16240640
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:37.055177+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9667000/0x0/0x4ffc00000, data 0x1f5a322/0x2005000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 13115392 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:38.055325+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 13115392 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:39.055451+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:40.055571+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.456128120s of 11.477423668s, submitted: 26
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:41.055666+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:42.055783+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238355 data_alloc: 234881024 data_used: 16240640
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:43.055919+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9667000/0x0/0x4ffc00000, data 0x1f5a322/0x2005000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:44.056044+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 13107200 heap: 122814464 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:45.056146+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 6742016 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:46.056290+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:47.056399+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347403 data_alloc: 234881024 data_used: 17268736
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d8000/0x0/0x4ffc00000, data 0x2ce9322/0x2d94000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:48.056529+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:49.056636+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:50.056765+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 8585216 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:51.056876+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 8577024 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d8000/0x0/0x4ffc00000, data 0x2ce9322/0x2d94000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.482524872s of 11.550142288s, submitted: 118
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:52.057035+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345067 data_alloc: 234881024 data_used: 17268736
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:53.057197+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:54.057344+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:55.057476+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d5000/0x0/0x4ffc00000, data 0x2cec322/0x2d97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:56.057611+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 8880128 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d4d845a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:57.057766+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345067 data_alloc: 234881024 data_used: 17268736
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:58.057893+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:59.058001+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:00.058179+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d5000/0x0/0x4ffc00000, data 0x2cec322/0x2d97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:01.058352+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:02.058512+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345067 data_alloc: 234881024 data_used: 17268736
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:03.058694+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:04.058818+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 8871936 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d5000/0x0/0x4ffc00000, data 0x2cec322/0x2d97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:05.058968+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 8847360 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:06.059145+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 8847360 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f88d5000/0x0/0x4ffc00000, data 0x2cec322/0x2d97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:07.059233+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 8847360 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345371 data_alloc: 234881024 data_used: 17276928
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.049795151s of 16.052217484s, submitted: 3
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820800 session 0x5584d39f81e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d429ef00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:08.059407+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 8847360 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d39ade00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:09.059632+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:10.059802+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:11.059971+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:12.060213+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218449 data_alloc: 234881024 data_used: 14897152
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:13.060392+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:14.060506+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 10575872 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:15.060646+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:16.060754+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:17.060891+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218449 data_alloc: 234881024 data_used: 14897152
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:18.061055+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:19.061302+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.493480682s of 11.520867348s, submitted: 50
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:20.061465+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:21.061625+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:22.061756+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217858 data_alloc: 234881024 data_used: 14897152
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:23.061901+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:24.062000+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:25.062173+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:26.062331+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:27.062447+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d47f50e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4d85680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17c00 session 0x5584d44512c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 10567680 heap: 123871232 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217858 data_alloc: 234881024 data_used: 14897152
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d45d2f00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d47025a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d45c45a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d47f2960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17800 session 0x5584d42a25a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d287cd20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:28.062611+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 11575296 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:29.062720+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 11575296 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f959d000/0x0/0x4ffc00000, data 0x2024375/0x20cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:30.062887+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 11542528 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:31.063050+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 11542528 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:32.063173+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 11542528 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248433 data_alloc: 234881024 data_used: 14897152
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f959d000/0x0/0x4ffc00000, data 0x2024375/0x20cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:33.063927+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 11542528 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.898181915s of 13.919568062s, submitted: 34
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d3c56960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:34.064035+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 11198464 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:35.064157+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 11321344 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:36.064282+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:37.064477+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260815 data_alloc: 234881024 data_used: 15953920
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9578000/0x0/0x4ffc00000, data 0x2048398/0x20f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:38.064573+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:39.064675+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:40.064786+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:41.064889+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9578000/0x0/0x4ffc00000, data 0x2048398/0x20f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:42.065012+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260815 data_alloc: 234881024 data_used: 15953920
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:43.065166+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 11313152 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019714355s of 10.027096748s, submitted: 11
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:44.065283+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 11157504 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91fc000/0x0/0x4ffc00000, data 0x23b8398/0x2464000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:45.065390+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:46.065482+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:47.065585+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300051 data_alloc: 234881024 data_used: 16035840
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:48.065691+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:49.065789+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:50.065906+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:51.066041+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:52.066150+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300067 data_alloc: 234881024 data_used: 16035840
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d4953680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:53.066301+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:54.066441+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:55.066582+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:56.066729+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:57.066897+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 10493952 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300067 data_alloc: 234881024 data_used: 16035840
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:58.067083+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 10485760 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:59.067194+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 10485760 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:00.067296+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 10485760 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:01.067387+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.798303604s of 17.844646454s, submitted: 87
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115572736 unmapped: 9347072 heap: 124919808 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17400 session 0x5584d49523c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc000 session 0x5584d47f30e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3a20000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d47f01e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d49523c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:02.067511+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 11493376 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330200 data_alloc: 234881024 data_used: 16035840
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f99000/0x0/0x4ffc00000, data 0x2627398/0x26d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:03.067700+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 11485184 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:04.067882+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 11485184 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:05.068042+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 11485184 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f99000/0x0/0x4ffc00000, data 0x2627398/0x26d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:06.068188+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 11419648 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:07.068297+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17400 session 0x5584d47f3680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 11436032 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330200 data_alloc: 234881024 data_used: 16035840
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfcc00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfcc00 session 0x5584d47f23c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:08.068404+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 11436032 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f99000/0x0/0x4ffc00000, data 0x2627398/0x26d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f2960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d47f32c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:09.068554+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 11878400 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f98000/0x0/0x4ffc00000, data 0x26273bb/0x26d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:10.068664+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 11968512 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:11.068812+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:12.068929+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341382 data_alloc: 234881024 data_used: 16977920
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f98000/0x0/0x4ffc00000, data 0x26273bb/0x26d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:13.069106+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:14.069255+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f98000/0x0/0x4ffc00000, data 0x26273bb/0x26d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:15.069370+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:16.069506+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d49025a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:17.069685+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341382 data_alloc: 234881024 data_used: 16977920
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f98000/0x0/0x4ffc00000, data 0x26273bb/0x26d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:18.069818+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 114065408 unmapped: 11902976 heap: 125968384 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.518260956s of 17.544563293s, submitted: 39
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:19.069929+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 9822208 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:20.070042+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 11419648 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:21.070157+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:22.070265+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417958 data_alloc: 234881024 data_used: 17158144
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x30803bb/0x312d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:23.070389+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:24.070518+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x30803bb/0x312d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:25.070619+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 11329536 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:26.070715+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 11059200 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:27.070788+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 11059200 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417318 data_alloc: 234881024 data_used: 17158144
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d3eeb2c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17400 session 0x5584d3a21680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:28.070885+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 11157504 heap: 127025152 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d46af680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:29.070992+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 12926976 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:30.071156+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 12926976 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.546496391s of 11.653998375s, submitted: 179
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1e0d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f91ee000/0x0/0x4ffc00000, data 0x23d2398/0x247e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:31.071272+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 11837440 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4450000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d287d2c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d287cd20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:32.071375+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 12533760 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246060 data_alloc: 234881024 data_used: 14766080
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:33.071633+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 12533760 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:34.071794+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f950f000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:35.071917+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:36.072071+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:37.072192+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247572 data_alloc: 234881024 data_used: 14766080
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:38.072335+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:39.072438+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:40.072563+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:41.072673+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:42.072804+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244413 data_alloc: 234881024 data_used: 14766080
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:43.072938+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:44.073075+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:45.073183+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:46.073284+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:47.073402+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244413 data_alloc: 234881024 data_used: 14766080
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:48.073550+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:49.073642+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:50.073770+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:51.073888+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:52.074003+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244413 data_alloc: 234881024 data_used: 14766080
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:53.074180+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:54.074319+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 12517376 heap: 128073728 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:55.074443+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfdc00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfdc00 session 0x5584d46aef00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d4431860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d4ed1860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4451e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.848478317s of 24.885297775s, submitted: 68
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d3c57680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 19963904 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfbc00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfbc00 session 0x5584d29e6f00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfbc00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfbc00 session 0x5584d28ed2c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d42a1c20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d4ed5680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:56.074586+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 19963904 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:57.074689+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 19955712 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315631 data_alloc: 234881024 data_used: 14766080
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f51000/0x0/0x4ffc00000, data 0x2670323/0x271b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d1e72780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:58.075167+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d47d0000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d47d0000 session 0x5584d457f2c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 19955712 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d2bd45a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2145c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2145c00 session 0x5584d4ed0f00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:59.075285+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfbc00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 19652608 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:00.075388+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 16015360 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:01.075519+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 16015360 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f2d000/0x0/0x4ffc00000, data 0x2694323/0x273f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:02.075675+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370251 data_alloc: 234881024 data_used: 19406848
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:03.075824+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:04.075952+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:05.076111+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:06.076286+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:07.076468+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 15982592 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370251 data_alloc: 234881024 data_used: 19406848
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f2d000/0x0/0x4ffc00000, data 0x2694323/0x273f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:08.076654+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 119455744 unmapped: 15966208 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8f2d000/0x0/0x4ffc00000, data 0x2694323/0x273f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.643066406s of 13.664609909s, submitted: 19
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:09.076776+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 9805824 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2ca8323/0x2d53000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:10.076945+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:11.077096+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:12.077331+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430737 data_alloc: 234881024 data_used: 19668992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:13.077480+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:14.077620+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 9437184 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:15.077733+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 9428992 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:16.077868+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 9428992 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:17.078041+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 9428992 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430753 data_alloc: 234881024 data_used: 19668992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:18.078240+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 9428992 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:19.078454+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 9412608 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:20.078919+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126017536 unmapped: 9404416 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:21.079021+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 9396224 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:22.079168+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 9396224 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430753 data_alloc: 234881024 data_used: 19668992
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:23.079680+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:24.079835+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:25.079990+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:26.080124+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:27.080281+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 9363456 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430905 data_alloc: 234881024 data_used: 19673088
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:28.080442+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 9355264 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:29.080545+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfac00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfac00 session 0x5584d3eec3c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd400 session 0x5584d49534a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd800 session 0x5584d4796b40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 9355264 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d46b03c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.557857513s of 20.602546692s, submitted: 80
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd400 session 0x5584d3a21a40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc800 session 0x5584d47f25a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d4431860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d59c4400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d59c4400 session 0x5584d2bd5860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d39ac5a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:30.080672+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867b000/0x0/0x4ffc00000, data 0x2f45333/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:31.080770+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:32.080901+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1444859 data_alloc: 234881024 data_used: 19673088
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867b000/0x0/0x4ffc00000, data 0x2f45333/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867b000/0x0/0x4ffc00000, data 0x2f45333/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:33.081042+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4ed5a40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:34.081182+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc800 session 0x5584d4f2f0e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 10158080 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd400 session 0x5584d3a23e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d59c4c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d59c4c00 session 0x5584d44310e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:35.081287+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 125009920 unmapped: 10412032 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:36.081382+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867a000/0x0/0x4ffc00000, data 0x2f45343/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:37.081442+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464457 data_alloc: 234881024 data_used: 22360064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:38.081551+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:39.081656+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:40.081789+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.953464508s of 10.961739540s, submitted: 6
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:41.081889+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 8732672 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867a000/0x0/0x4ffc00000, data 0x2f45343/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:42.082027+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 8691712 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464121 data_alloc: 234881024 data_used: 22360064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:43.082232+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 8691712 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:44.082356+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f867a000/0x0/0x4ffc00000, data 0x2f45343/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:45.082456+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127205376 unmapped: 8216576 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f83b5000/0x0/0x4ffc00000, data 0x320a343/0x32b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:46.082548+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 8183808 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:47.082674+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 8183808 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525997 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:48.082803+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 8175616 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:49.082931+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 8175616 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:50.083018+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 8175616 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:51.083115+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 8167424 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:52.083211+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.735983849s of 11.772041321s, submitted: 44
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 8716288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:53.083328+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 8716288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:54.083436+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 8675328 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:55.083585+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 8675328 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:56.083690+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 8675328 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:57.083817+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 8675328 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:58.083948+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 8667136 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:59.084091+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 8667136 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:00.084254+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 8667136 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:01.084365+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 8667136 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:02.084491+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:03.084645+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:04.084806+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:05.084942+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:06.085076+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:07.085210+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:08.085345+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:09.085451+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:10.085576+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:11.085730+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:12.085906+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522799 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:13.086035+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:14.086208+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f37000/0x0/0x4ffc00000, data 0x3688343/0x3735000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:15.086331+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 8658944 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:16.086509+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.080341339s of 24.082492828s, submitted: 2
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:17.086617+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:18.086752+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:19.086856+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:20.086990+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:21.087080+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:22.087239+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:23.087378+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 8642560 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:24.087522+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 8634368 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:25.087693+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 8634368 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:26.087806+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:27.087940+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:28.088066+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:29.088284+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:30.088474+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:31.088610+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:32.088791+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:33.088952+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:34.089109+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:35.089258+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:36.089389+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:37.089544+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 8617984 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:38.089666+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 8609792 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:39.089799+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 8609792 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:40.090182+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:41.090286+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:42.090366+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523103 data_alloc: 234881024 data_used: 22642688
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x3689343/0x3736000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:43.090563+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:44.090722+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:45.090855+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:46.090968+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 8601600 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d2716960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4ed5e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.627803802s of 30.629236221s, submitted: 1
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:47.091090+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc800 session 0x5584d47f3c20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429886 data_alloc: 234881024 data_used: 19673088
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:48.091201+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f890f000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:49.091315+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:50.091482+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:51.091604+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:52.091750+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f890f000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d471c1e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfbc00 session 0x5584d41ee3c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 10764288 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429886 data_alloc: 234881024 data_used: 19673088
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f890f000/0x0/0x4ffc00000, data 0x2cb2323/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d44314a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:53.091895+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:54.092020+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:55.092180+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:56.092332+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:57.092491+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262508 data_alloc: 234881024 data_used: 11358208
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:58.092645+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:59.092799+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:00.092932+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:01.093070+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:02.093225+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262508 data_alloc: 234881024 data_used: 11358208
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d3c57860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:03.093352+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:04.093492+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:05.093636+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:06.093775+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:07.093911+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262508 data_alloc: 234881024 data_used: 11358208
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:08.094053+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:09.094264+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:10.094407+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:11.094534+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 16703488 heap: 135421952 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:12.094663+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d47f45a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d47f4b40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfbc00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfbc00 session 0x5584d47f4d20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f4960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.468971252s of 25.488265991s, submitted: 34
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3eea780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d41eed20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d41eef00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfc800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfc800 session 0x5584d3c56d20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3eeba40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373069 data_alloc: 234881024 data_used: 11358208
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:13.094774+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f89a2000/0x0/0x4ffc00000, data 0x2c1e385/0x2cca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:14.094928+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:15.095077+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f89a2000/0x0/0x4ffc00000, data 0x2c1e385/0x2cca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:16.095244+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:17.095396+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373069 data_alloc: 234881024 data_used: 11358208
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:18.095576+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:19.095709+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f89a2000/0x0/0x4ffc00000, data 0x2c1e385/0x2cca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27312128 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3a205a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:20.095842+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 28196864 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:21.095953+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 19505152 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:22.096116+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 19505152 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477502 data_alloc: 234881024 data_used: 25718784
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:23.096273+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f897d000/0x0/0x4ffc00000, data 0x2c423a8/0x2cef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 19505152 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:24.096879+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 19505152 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d3c56780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d3c55860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:25.097041+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.703448296s of 12.744194031s, submitted: 43
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd400 session 0x5584d39ac5a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:26.097149+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:27.097268+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273728 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:28.097397+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:29.097547+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:30.097710+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:31.097817+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:32.097937+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 27860992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273728 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d42a2000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d46b05a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d3a1c960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d4430d20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:33.098053+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d59c4800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d59c4800 session 0x5584d471de00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d29e70e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d46afa40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4f2f4a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d4953a40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0d000 session 0x5584d3eede00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:34.098217+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:35.098334+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:36.098522+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:37.098629+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363781 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:38.098766+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d46af680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:39.098935+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 28483584 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:40.099063+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 28205056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:41.099147+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 23552000 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:42.099306+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 23552000 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454373 data_alloc: 234881024 data_used: 23724032
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:43.099496+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d46aef00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d44314a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 23519232 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.636554718s of 18.685222626s, submitted: 63
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d3a21c20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:44.099585+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x2ac1375/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:45.099751+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:46.099880+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:47.100012+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278553 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:48.100164+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:49.100331+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29573120 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c16000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:50.100471+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:51.100595+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:52.100801+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280065 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:53.100998+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:54.101172+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:55.101340+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:56.101445+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:57.101551+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280065 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:58.101684+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:59.101820+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:00.101953+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:01.102085+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:02.102220+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97da000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 29556736 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280065 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:03.102367+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4c17400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4c17400 session 0x5584d47f34a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f2b40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d47f2780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d47f25a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.483383179s of 19.503284454s, submitted: 22
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d4f2fa40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160c00 session 0x5584d287c000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d457f2c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d457e3c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4ed4000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:04.102504+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:05.102639+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:06.102740+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:07.102865+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303707 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:08.103000+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d4ed52c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6161c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 29270016 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:09.103104+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:10.103264+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:11.103370+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:12.103514+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316323 data_alloc: 234881024 data_used: 12931072
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:13.103671+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:14.103810+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:15.103977+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fc5313/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:16.104145+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:17.104273+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 29229056 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316323 data_alloc: 234881024 data_used: 12931072
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:18.104373+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.134925842s of 15.145795822s, submitted: 11
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 24338432 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:19.104527+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8ddd000/0x0/0x4ffc00000, data 0x27e5313/0x288f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 24338432 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:20.104613+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:21.104729+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8dc3000/0x0/0x4ffc00000, data 0x27f7313/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:22.104955+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383479 data_alloc: 234881024 data_used: 14221312
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:23.105116+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:24.105253+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:25.105443+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8dc3000/0x0/0x4ffc00000, data 0x27f7313/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:26.105601+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8dc3000/0x0/0x4ffc00000, data 0x27f7313/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:27.105752+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383631 data_alloc: 234881024 data_used: 14225408
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:28.105882+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:29.106013+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:30.106152+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8dc3000/0x0/0x4ffc00000, data 0x27f7313/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160000 session 0x5584d3c57a40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6161c00 session 0x5584d3a22d20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 22781952 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:31.106260+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.536352158s of 12.585522652s, submitted: 84
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d47f2960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:32.106383+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284656 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:33.106551+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:34.106676+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:35.106811+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:36.106941+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:37.107045+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284656 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:38.107179+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:39.107317+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:40.107450+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:41.107557+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:42.107693+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284656 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:43.107894+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:44.108059+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:45.108198+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread fragmentation_score=0.000153 took=0.000046s
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 25739264 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:46.108335+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:47.108444+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284656 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:48.108558+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:49.108695+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:50.108826+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:51.108919+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 25731072 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:52.109038+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.299646378s of 21.308881760s, submitted: 13
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3c57e00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d3a223c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4bfd000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4bfd000 session 0x5584d46aef00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d46af680
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4f2e960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 25772032 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368141 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:53.109191+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 25772032 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:54.109335+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 25772032 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:55.109475+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a7e000/0x0/0x4ffc00000, data 0x2733375/0x27de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 25772032 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:56.109591+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 25763840 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:57.109688+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 25763840 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368141 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:58.109801+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a7e000/0x0/0x4ffc00000, data 0x2733375/0x27de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 25763840 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:59.109904+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d3c58780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6161c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 25812992 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:00.110006+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:01.110104+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a7d000/0x0/0x4ffc00000, data 0x2733398/0x27df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:02.110201+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426247 data_alloc: 234881024 data_used: 19419136
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:03.110352+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:04.110453+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:05.110551+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a7d000/0x0/0x4ffc00000, data 0x2733398/0x27df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:06.110652+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:07.110758+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 22929408 heap: 145989632 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426247 data_alloc: 234881024 data_used: 19419136
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:08.110858+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.777359962s of 15.817565918s, submitted: 53
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160800 session 0x5584d4f2fc20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d43b6c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d43b6c00 session 0x5584d47f43c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d39f83c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d4ed12c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d4ed0780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f80f7000/0x0/0x4ffc00000, data 0x30b9398/0x3165000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 25526272 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:09.110962+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 19357696 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:10.111060+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160800 session 0x5584d4ed0f00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 17154048 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:11.111169+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d43c1400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d43c1400 session 0x5584d4ed0000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3eede00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3eecb40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131997696 unmapped: 17145856 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:12.111359+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 14237696 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1638084 data_alloc: 234881024 data_used: 23384064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:13.111608+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7368000/0x0/0x4ffc00000, data 0x3e29398/0x3ed5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 8151040 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:14.111774+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 8151040 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:15.111873+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 8151040 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:16.111975+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 8617984 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:17.112079+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:18.112224+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 8617984 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1670212 data_alloc: 251658240 data_used: 29474816
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:19.112355+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 8609792 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7368000/0x0/0x4ffc00000, data 0x3e48398/0x3ef4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:20.112459+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 8609792 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7368000/0x0/0x4ffc00000, data 0x3e48398/0x3ef4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:21.112556+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 8609792 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.249376297s of 13.353449821s, submitted: 191
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:22.112657+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 145309696 unmapped: 3833856 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:23.112826+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 6389760 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1764866 data_alloc: 251658240 data_used: 30367744
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f6819000/0x0/0x4ffc00000, data 0x4997398/0x4a43000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:24.113012+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142958592 unmapped: 6184960 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:25.113253+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142958592 unmapped: 6184960 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:26.113489+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142958592 unmapped: 6184960 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f6801000/0x0/0x4ffc00000, data 0x49af398/0x4a5b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:27.113714+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142966784 unmapped: 6176768 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:28.113843+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142966784 unmapped: 6176768 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765330 data_alloc: 251658240 data_used: 30367744
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:29.113969+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142966784 unmapped: 6176768 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:30.114076+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f67fe000/0x0/0x4ffc00000, data 0x49b2398/0x4a5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:31.114184+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:32.114372+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f67fe000/0x0/0x4ffc00000, data 0x49b2398/0x4a5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:33.115043+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1763018 data_alloc: 251658240 data_used: 30367744
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:34.115159+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f67fe000/0x0/0x4ffc00000, data 0x49b2398/0x4a5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:35.115267+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:36.115503+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 6168576 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:37.115654+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142983168 unmapped: 6160384 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:38.115816+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 142983168 unmapped: 6160384 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1763018 data_alloc: 251658240 data_used: 30367744
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d43f2d20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.842294693s of 16.914205551s, submitted: 108
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160800 session 0x5584d4952b40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4b9d000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4b9d000 session 0x5584d46afc20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:39.115959+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 14082048 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7cdb000/0x0/0x4ffc00000, data 0x34d5398/0x3581000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:40.116107+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f7cdb000/0x0/0x4ffc00000, data 0x34d5398/0x3581000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 14082048 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:41.116248+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 14082048 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6161c00 session 0x5584d3a1c000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160400 session 0x5584d45d32c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:42.117593+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135102464 unmapped: 14041088 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d471da40
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:43.117726+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315640 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:44.117854+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:45.117957+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:46.118120+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:47.118665+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:48.118759+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315640 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:49.118914+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:50.119081+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:51.119217+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:52.119392+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:53.119542+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315640 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:54.119708+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:55.119846+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:56.119986+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:57.120081+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:58.120223+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315640 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:59.120401+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8a29000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:00.120562+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 18825216 heap: 149143552 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d3a1cd20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2845c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2845c00 session 0x5584d49521e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3eec1e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d45d30e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.817098618s of 21.874700546s, submitted: 96
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160400 session 0x5584d3eec3c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6161c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6161c00 session 0x5584d4f2e780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160800
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160800 session 0x5584d39acf00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3a20d20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d2820400 session 0x5584d472e960
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:01.120667+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4c000/0x0/0x4ffc00000, data 0x2465323/0x2510000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:02.120816+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:03.120987+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367100 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:04.121140+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:05.121292+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6160400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6160400 session 0x5584d46ae5a0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:06.121460+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 28196864 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d6161c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d4b98c00
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4b000/0x0/0x4ffc00000, data 0x2465346/0x2511000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:07.121603+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 28188672 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:08.121728+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415766 data_alloc: 234881024 data_used: 17817600
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:09.121880+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:10.122043+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:11.122197+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:12.122304+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4b000/0x0/0x4ffc00000, data 0x2465346/0x2511000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:13.122449+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415766 data_alloc: 234881024 data_used: 17817600
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:14.122607+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:15.122717+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4b000/0x0/0x4ffc00000, data 0x2465346/0x2511000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:16.122818+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 28696576 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.094482422s of 16.114328384s, submitted: 19
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:17.122926+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136011776 unmapped: 22650880 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8d4b000/0x0/0x4ffc00000, data 0x2465346/0x2511000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:18.123037+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506740 data_alloc: 234881024 data_used: 18153472
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:19.123155+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82da000/0x0/0x4ffc00000, data 0x2eb1346/0x2f5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:20.123305+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:21.123437+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:22.123575+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 22126592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82da000/0x0/0x4ffc00000, data 0x2eb1346/0x2f5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:23.123722+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 22929408 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497156 data_alloc: 234881024 data_used: 18153472
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82fd000/0x0/0x4ffc00000, data 0x2eb3346/0x2f5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:24.123860+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 22929408 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:25.123993+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 22921216 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:26.124094+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 22921216 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:27.124272+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 22921216 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f82fd000/0x0/0x4ffc00000, data 0x2eb3346/0x2f5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:28.124468+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.581636429s of 11.648424149s, submitted: 112
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 22921216 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497380 data_alloc: 234881024 data_used: 18153472
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d6161c00 session 0x5584d47f03c0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d4b98c00 session 0x5584d2c2d0e0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:29.124628+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d1b1f400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1b1f400 session 0x5584d3c56d20
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:30.124765+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:31.124933+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:32.125057+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:33.125187+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:34.125318+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:35.125496+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:36.125615+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:37.125745+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:38.125845+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:39.125995+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:40.126131+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:41.126302+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130375680 unmapped: 28286976 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:42.126428+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:43.126579+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:44.126716+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:45.126853+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:46.126973+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:47.127104+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:48.127285+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:49.127432+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130383872 unmapped: 28278784 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:50.127593+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 28270592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:51.127750+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 28270592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:52.127904+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 28270592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:53.128075+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 28270592 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:54.128205+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 28262400 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:55.128367+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 28262400 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:56.128508+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 28262400 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:57.128678+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 28262400 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:58.128817+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 3198 syncs, 3.41 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3881 writes, 13K keys, 3881 commit groups, 1.0 writes per commit group, ingest: 16.69 MB, 0.03 MB/s
                                           Interval WAL: 3881 writes, 1691 syncs, 2.30 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:59.128949+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:00.129075+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:01.129191+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:02.129290+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:03.129441+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:04.129576+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:05.129712+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 28254208 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:06.129824+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:07.129960+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:08.130095+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:09.130217+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:10.130347+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:11.130488+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130416640 unmapped: 28246016 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:12.130622+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:13.130770+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:14.130917+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:15.131051+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:16.131145+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:17.131236+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:18.131347+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:19.131459+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130424832 unmapped: 28237824 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:20.131568+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:21.131669+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:22.131781+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:23.131903+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:24.132032+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:25.132133+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 28229632 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:26.132237+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130441216 unmapped: 28221440 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:27.132342+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130441216 unmapped: 28221440 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:28.132468+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130441216 unmapped: 28221440 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:29.132605+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'config diff' '{prefix=config diff}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'config show' '{prefix=config show}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 28467200 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:30.132715+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 129744896 unmapped: 28917760 heap: 158662656 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:31.132817+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'log dump' '{prefix=log dump}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 141115392 unmapped: 28590080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:32.132922+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 39395328 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:33.133044+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 39395328 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:34.133156+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 39395328 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:35.133276+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 39395328 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:36.133392+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 39395328 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:37.133460+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 39395328 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:38.133562+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 39387136 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:39.133660+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 39387136 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:40.133766+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 39387136 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:41.133876+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 39378944 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:42.133973+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 39378944 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:43.134103+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 39378944 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:44.134210+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 39378944 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:45.134320+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 39378944 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:46.134444+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 39378944 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:47.134554+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130334720 unmapped: 39370752 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:48.134652+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130334720 unmapped: 39370752 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:49.134766+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130334720 unmapped: 39370752 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:50.134877+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130334720 unmapped: 39370752 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:51.134990+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:52.135091+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:53.135211+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:54.135314+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:55.135439+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:56.135563+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:57.135674+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:58.135840+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:59.135998+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 39362560 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:00.136145+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 39354368 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:01.136287+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 39354368 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:02.136440+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 39354368 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:03.136586+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 39354368 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325676 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:04.136691+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 39354368 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:05.136867+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 39354368 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93ca000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:06.137036+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 39354368 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:07.137189+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130351104 unmapped: 39354368 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.749176025s of 99.777038574s, submitted: 43
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:08.137324+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130359296 unmapped: 39346176 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:09.137428+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:10.137579+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:11.137707+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:12.137825+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:13.137966+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:14.138131+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:15.138281+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:16.138378+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:17.138538+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:18.138669+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:19.138850+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:20.139004+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:21.139138+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:22.139306+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:23.139510+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:24.139645+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 38944768 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:25.139768+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:26.139876+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:27.140029+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:28.140162+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:29.140325+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:30.140441+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:31.140566+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:32.140688+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:33.140857+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 38936576 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:34.141013+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 38928384 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:35.141177+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 38928384 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:36.141343+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 38928384 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:37.141503+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 38928384 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:38.141623+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 38928384 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:39.141776+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 38928384 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:40.141911+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:41.142035+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:42.142165+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:43.142350+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:44.142475+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:45.142596+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:46.142722+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:47.142849+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:48.143002+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:49.143552+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:50.143685+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:51.143812+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:52.143953+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:53.144103+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:54.144222+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:55.144341+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:56.144454+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:57.144576+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:58.144719+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:59.144915+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:00.145053+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:01.145166+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:02.145317+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:03.145486+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:04.145615+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:05.145746+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 38920192 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:06.145874+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 38912000 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:07.146023+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 38912000 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:08.146167+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 38912000 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:09.146328+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 38912000 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:10.146479+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 38912000 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:11.146615+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 38903808 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:12.146762+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 38903808 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:13.146936+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 38903808 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:14.147061+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 38903808 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:15.147174+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 38903808 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:16.147342+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 38903808 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1e0c800 session 0x5584d1e73860
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2820400
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 ms_handle_reset con 0x5584d1753400 session 0x5584d4f5a780
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: handle_auth_request added challenge on 0x5584d2821000
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:17.147501+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:18.147713+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:19.147867+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:20.148023+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:21.148169+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:22.148322+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:23.148464+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:24.148714+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:25.148867+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:26.148996+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:27.149136+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:28.149266+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:29.149407+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 38887424 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:30.149570+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 38879232 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:31.149706+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 38879232 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:32.149836+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 38879232 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:33.149997+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 38879232 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:34.150132+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 38879232 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:35.150270+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 38879232 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:36.150439+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 38879232 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:37.150578+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 38879232 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:38.150703+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 38871040 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:39.150827+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 38871040 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:40.150925+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 38871040 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:41.151024+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 38871040 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:42.151153+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 38871040 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:43.151350+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 38871040 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:44.151490+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 38871040 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:45.151632+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 38871040 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:46.151777+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:47.151918+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:48.152054+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:49.152205+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:50.152350+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:51.152477+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:52.152619+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:53.152815+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:54.152961+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:55.153080+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:56.153214+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:57.153322+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:58.153441+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:59.153569+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 38862848 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:00.153737+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:01.153870+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:02.153999+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:03.154129+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:04.154392+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:05.154547+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:06.154663+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:07.154991+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:08.155122+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:09.155245+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:10.155388+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:11.155556+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:12.155681+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:13.155789+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:14.155948+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 38854656 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:15.156108+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:16.156233+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:17.156838+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:18.156969+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:19.157102+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:20.157251+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:21.157376+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:22.157503+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:23.157644+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:24.157775+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:25.157896+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:26.158021+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:27.158145+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:28.158270+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 234881024 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:29.158369+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:30.158499+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:31.158634+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 38846464 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:32.158766+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:33.158916+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:34.159068+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:35.159220+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:36.159364+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:37.159560+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:38.159682+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:39.160265+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:40.160397+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:41.160559+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:42.160679+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:43.160833+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:44.160966+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:45.161127+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 38838272 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:46.161271+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:47.161499+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:48.161827+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:49.161957+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:50.162115+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:51.162263+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:52.162455+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:53.162609+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:54.162715+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:55.162863+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:56.163020+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:57.163170+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:58.163316+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:59.163457+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:00.163578+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:01.163746+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:02.163887+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:03.164049+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 38830080 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:04.164161+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 38821888 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:05.164268+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 38821888 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:06.164405+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 38821888 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:07.164550+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 38821888 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:08.165014+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 38821888 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:09.165129+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 38821888 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:10.165261+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 38821888 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:11.165387+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 38821888 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:12.165511+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:13.165636+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:14.165764+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:15.165884+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:16.166039+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:17.166180+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:18.166322+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:19.166441+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:20.166596+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:21.166724+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:22.166859+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:23.167010+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:24.167139+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:25.167309+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:26.167464+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:27.167622+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:28.167778+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:29.167906+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:30.168062+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 38813696 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:31.168221+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:32.168347+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:33.168500+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:34.168631+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:35.168754+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:36.168896+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:37.169031+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:38.169165+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:39.169305+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:40.169455+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:41.169579+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:42.169707+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:43.169873+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 38805504 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:44.170010+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 38797312 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:45.170173+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 38797312 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:46.170307+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 38797312 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:47.170440+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 38797312 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:48.170605+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 38797312 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:49.170714+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 38797312 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:50.170870+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 38797312 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:51.171030+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 38797312 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:52.171185+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:53.171373+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:54.171483+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:55.171587+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:56.171713+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:57.171828+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:58.171997+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:59.172112+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:00.172273+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:01.172382+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:02.172497+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:03.172606+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:04.172767+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 38789120 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:05.172925+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 38780928 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:06.173092+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 38780928 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:07.173228+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 38780928 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:08.173359+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 38780928 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:09.173500+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 38780928 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:10.173677+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 38780928 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:11.173826+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:12.173986+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:13.174171+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:14.174313+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:15.174496+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:16.174666+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:17.174798+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:18.174931+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:19.175082+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:20.175200+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:21.175310+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:22.175449+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 38772736 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:23.175616+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:24.175779+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:25.175913+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:26.176072+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:27.176220+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:28.176376+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:29.176496+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:30.176652+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:31.176801+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:32.176948+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:33.177128+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:34.177271+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:35.177437+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 38764544 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:36.177580+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:37.177729+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:38.177864+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:39.178015+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:40.178130+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:41.178249+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:42.178371+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:43.178557+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:44.178701+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:45.178841+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:46.178985+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:47.179101+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:48.179241+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 38756352 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:49.179390+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 38748160 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:50.179552+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 38748160 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:51.179689+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 38748160 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:52.179843+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 38748160 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:53.180027+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 38748160 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:54.180603+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:55.180764+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:56.180910+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:57.181051+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:58.181197+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:59.181367+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:00.181473+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:01.181601+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:02.181722+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:03.181880+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:04.182009+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 38739968 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:05.182157+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:06.182284+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:07.182484+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:08.182614+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:09.182760+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:10.182872+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:11.182966+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:12.183179+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:13.183464+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:14.183597+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:15.183723+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:16.183878+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:17.184022+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:18.184153+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:19.184323+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:20.184472+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 38731776 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:21.184634+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:22.184765+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:23.184939+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:24.185101+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:25.185218+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:26.185338+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:27.185438+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:28.185555+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:29.185712+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:30.185830+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:31.185977+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:32.186128+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 38723584 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:33.186313+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:34.186436+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:35.186585+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:36.186720+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:37.186874+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:38.187028+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:39.187174+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:40.187323+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:41.187484+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:42.187628+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:43.187836+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 38715392 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:44.187982+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 38699008 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:45.188132+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 38699008 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:46.188316+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 38699008 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:47.188461+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 38699008 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:48.188677+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 38699008 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:49.188863+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 38699008 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:50.189021+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 38699008 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:51.189165+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 38699008 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:52.189310+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:53.189499+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:54.189666+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:55.189834+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:56.190002+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:57.190146+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:58.190330+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:59.190442+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:00.190576+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:01.190738+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:02.190859+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:03.191024+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:04.191197+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:05.191696+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 38690816 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:06.191846+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:07.191993+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:08.192125+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:09.192264+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:10.192436+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:11.192596+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:12.192815+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:13.193033+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:14.193232+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:15.193448+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:16.193596+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:17.193739+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 38674432 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:18.193880+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 38666240 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:19.194015+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 38666240 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:20.194160+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 38666240 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:21.194321+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 38666240 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:22.194472+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 38666240 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:23.194701+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 38666240 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:24.194978+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 38666240 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:25.195151+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 38666240 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:26.195332+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131047424 unmapped: 38658048 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:27.195531+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131047424 unmapped: 38658048 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:28.195774+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131047424 unmapped: 38658048 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:29.196012+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131047424 unmapped: 38658048 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:30.196249+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131047424 unmapped: 38658048 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:31.196457+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:32.196635+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:33.196905+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:34.197140+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:35.197344+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:36.197500+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:37.197631+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:38.197828+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:39.197994+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:40.198221+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:41.198437+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131055616 unmapped: 38649856 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:42.198792+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:43.199012+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:44.199223+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:45.199498+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:46.199698+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:47.199856+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:48.200015+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:49.200199+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:50.200381+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:51.200584+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:52.200808+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 38641664 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:53.201105+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131072000 unmapped: 38633472 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:54.201285+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131072000 unmapped: 38633472 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:55.201535+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131072000 unmapped: 38633472 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:56.201735+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:57.201985+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:58.202151+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:59.203176+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:00.203363+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:01.203543+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:02.203705+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:03.203947+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:04.204494+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:05.204664+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:06.204764+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:07.204900+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:08.205851+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:09.206060+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:10.206262+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 38625280 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:11.206450+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:12.206591+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:13.206828+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:14.207009+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:15.207177+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:16.207360+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:17.207564+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:18.207738+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:19.207884+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:20.208021+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 38617088 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:21.208206+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:22.208399+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:23.208661+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:24.209068+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:25.209265+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:26.209551+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:27.209727+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:28.209878+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:29.210622+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 38608896 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:30.210819+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 38600704 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:31.211304+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 38600704 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:32.211492+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 38600704 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:33.211777+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 38600704 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:34.211995+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 38600704 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:35.212112+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 38600704 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:36.212209+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 38592512 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:37.212357+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 38592512 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:38.212485+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 38592512 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:39.212600+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:13 compute-1 ceph-osd[77354]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 38592512 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325384 data_alloc: 218103808 data_used: 11096064
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:40.212727+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 38592512 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:41.212826+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'config diff' '{prefix=config diff}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'config show' '{prefix=config show}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 39157760 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
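The do_command entries are requests arriving on the OSD's admin socket; config diff/show and counter dump/schema are the calls a metrics or support-data collector typically walks, and "result is N bytes" is the handler logging the size of its reply. The same commands can be replayed by hand through the ceph daemon CLI, as in this sketch (the daemon name is assumed):

    import json
    import subprocess

    def asok(daemon, *cmd):
        """Issue an admin-socket command via the ceph CLI; parse the JSON reply."""
        out = subprocess.run(["ceph", "daemon", daemon, *cmd],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out) if out.strip() else {}

    diff = asok("osd.0", "config", "diff")       # options changed from defaults
    schema = asok("osd.0", "counter", "schema")  # perf-counter descriptions
    print(len(diff), "changed options;", len(schema), "counter sections")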
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:42.212926+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 39059456 heap: 169705472 old mem: 2845415833 new mem: 2845415833
Nov 25 10:11:13 compute-1 ceph-osd[77354]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f93cb000/0x0/0x4ffc00000, data 0x1de7313/0x1e91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: tick
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_tickets
Nov 25 10:11:13 compute-1 ceph-osd[77354]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:43.213043+0000)
Nov 25 10:11:13 compute-1 ceph-osd[77354]: do_command 'log dump' '{prefix=log dump}'
Nov 25 10:11:13 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:13 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:13 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:13.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
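The radosgw trio (request start, completion, beast access line) records an anonymous HEAD / answered with 200 in effectively zero time. Arriving roughly once a second from each 192.168.122.x host, these read like load-balancer health probes rather than client traffic, though that is an inference from the pattern, not something the log states. Pulling the useful fields out of a beast line, regex mine:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous '
            '[25/Nov/2025:10:11:13.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group("ip"), m.group("request"), m.group("status"), m.group("latency"))
    # 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000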
Nov 25 10:11:13 compute-1 rsyslogd[963]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
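The imjournal reload likely explains the wall of identically stamped ceph-osd lines above: everything carries the 10:11:13 flush time, while the auth-expiry timestamps embedded in the messages advance one second per tick, from 10:09:21 to 10:10:41. Measuring the real span from the embedded times:

    from datetime import datetime

    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
    first = datetime.strptime("2025-11-25T10:09:21.194321+0000", fmt)
    last = datetime.strptime("2025-11-25T10:10:41.212826+0000", fmt)
    print(last - first)   # 0:01:20.018505, i.e. ~80 one-second cycles behind one syslog stamp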
Nov 25 10:11:14 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 10:11:14 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1093066099' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:14 compute-1 sudo[251691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:11:14 compute-1 sudo[251691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:14 compute-1 sudo[251691]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/595608251' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.28976 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.28801 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2477092599' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2482510223' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1422420507' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.29003 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.28819 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2056844208' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3558068328' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.28834 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.29018 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1093066099' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3373365784' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:14 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:14 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:14 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:14.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:14 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 10:11:14 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1907537029' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 25 10:11:14 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2503635589' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:11:14 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 10:11:14 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866828572' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
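On the monitor side every CLI call shows up twice, as handle_command on the peon and as a log_channel(audit) [DBG] record naming the source address, the authenticated entity, and the command before it is dispatched. Tallying the cmd prefixes in a window like this one makes the polling pattern plain; the regex and helper below are mine:

    import re
    from collections import Counter

    CMD_RE = re.compile(r'cmd=\[\{"prefix": "(?P<prefix>[^"]+)"')

    def top_commands(lines, n=5):
        """Rank the command prefixes seen in ceph-mon audit/dispatch lines."""
        hits = (CMD_RE.search(line) for line in lines)
        return Counter(m.group("prefix") for m in hits if m).most_common(n)

    sample = [
        'cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch',
        'cmd=[{"prefix": "mgr versions"}]: dispatch',
        'cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch',
    ]
    print(top_commands(sample))   # [('orch ls', 2), ('mgr versions', 1)]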
Nov 25 10:11:14 compute-1 crontab[251910]: (root) LIST (root)
Nov 25 10:11:15 compute-1 nova_compute[228683]: 2025-11-25 10:11:15.112 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
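The nova_compute line is the ovsdbapp IDL's event loop reporting that poll() woke up with POLLIN on its OVSDB connection. The shape of that wakeup is easy to reproduce; in the sketch below a socketpair stands in for the real OVSDB socket:

    import select
    import socket

    a, b = socket.socketpair()
    poller = select.poll()
    poller.register(a.fileno(), select.POLLIN)
    b.send(b"update")                        # server-side activity
    for fd, events in poller.poll(1000):
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")    # same shape as the __log_wakeup line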
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.29021 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1330753870' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.29039 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.29045 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.29048 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1907537029' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3730769747' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3024921306' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.19299 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.29075 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2503635589' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.28891 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
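The pgmap summary ties back to the per-OSD numbers above: 337 placement groups, all active+clean, and a 60 GiB raw capacity that is three of the ~20 GiB OSDs seen in the heartbeat store_statfs:

    per_osd_total = 0x4FFC00000                    # total bytes from the heartbeat line
    print(f"{3 * per_osd_total / 2**30:.1f} GiB")  # 60.0 GiB, matching "60 GiB / 60 GiB avail"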
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.19323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/866828572' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.29105 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2441670351' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 25 10:11:15 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/372716450' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:11:15 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 25 10:11:15 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1256456425' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:11:15 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:15 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:15 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:15.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.28927 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3471746005' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.29132 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3914276966' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.19365 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.19371 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/372716450' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4089255639' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.29159 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.28990 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1256456425' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.19401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1936966365' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.19407 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/94335548' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3996815968' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:11:16 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:16 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:16 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 25 10:11:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3907824213' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 25 10:11:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/13099019' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:11:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 25 10:11:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/34667073' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-1 podman[252190]: 2025-11-25 10:11:16.822609892 +0000 UTC m=+0.075691407 container health_status 8d3611c59f0a876dc61378591305b1a9c64c747306eb4abfd5873d991b682069 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 10:11:16 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 25 10:11:16 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2768433089' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 25 10:11:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1222381632' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 25 10:11:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2630515814' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.29020 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.19431 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3907824213' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/13099019' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2661222347' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.19455 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.29237 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/34667073' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3894594024' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/869887195' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1034596140' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.29074 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2768433089' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1222381632' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2982949518' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2615350652' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2630515814' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 25 10:11:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3066847444' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 nova_compute[228683]: 2025-11-25 10:11:17.395 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 25 10:11:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/915986928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:17 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 25 10:11:17 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2865881099' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-1 systemd[1]: Starting Hostname Service...
Nov 25 10:11:17 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:17 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:17 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:18 compute-1 systemd[1]: Started Hostname Service.
Nov 25 10:11:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 25 10:11:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1674075659' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 25 10:11:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2311045081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:18 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:18 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:18 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2257282358' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.19506 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3066847444' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3328015327' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1334765920' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/915986928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2740963012' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2865881099' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3855566086' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3776552969' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4252505220' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1708222253' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1674075659' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2311045081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3984037843' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/941136760' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4162767992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/3748325728' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 10:11:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/627438796' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 25 10:11:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2856483859' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 25 10:11:18 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4030864340' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/627438796' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2856483859' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/898889878' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3558542597' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.29369 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2067234885' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.29387 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3364237' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/891235372' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3057625564' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.29405 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4030864340' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.19611 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2073664472' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3931407246' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/147863761' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 25 10:11:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/106983101' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:11:19 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:19 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:19 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:19 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 25 10:11:19 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1094097717' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:11:20 compute-1 nova_compute[228683]: 2025-11-25 10:11:20.113 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:20 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:20 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:20 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:20 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:11:20 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/526553241' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.29242 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.29447 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.29266 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.29272 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1208949389' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/106983101' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/921274166' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.29474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.29296 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.19677 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2504291124' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.29501 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1094097717' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1644087196' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:11:20 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/526553241' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.29320 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.19704 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.29519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.19698 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.29350 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.19719 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3552601814' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.29540 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2191149956' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.29374 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-1 ceph-mon[79643]: pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.19740 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2413847078' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1887929255' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/3774492437' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 25 10:11:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4115573552' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:21 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:21 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:21.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 25 10:11:22 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1685573185' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:22 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:22 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:22 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:22.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.29395 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.19779 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4115573552' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.29428 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3483204903' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.19809 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.29606 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.29458 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/2533429017' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/4227443116' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1685573185' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4083160143' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 25 10:11:22 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1268220495' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:11:22 compute-1 nova_compute[228683]: 2025-11-25 10:11:22.398 228687 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:22 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:22 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:22 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 25 10:11:22 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1199518046' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='client.19863 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='client.29651 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1268220495' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/101408041' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1199518046' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3895785370' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/2843991185' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/300601334' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:11:23 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 25 10:11:23 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4266040070' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:11:23 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:23 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:23 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:23.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:24 compute-1 ceph-mon[79643]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 25 10:11:24 compute-1 ceph-mon[79643]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1551765300' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:11:24 compute-1 radosgw[81450]: ====== starting new request req=0x7fe80522a5d0 =====
Nov 25 10:11:24 compute-1 radosgw[81450]: ====== req done req=0x7fe80522a5d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:24 compute-1 radosgw[81450]: beast: 0x7fe80522a5d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:24.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.19932 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.29711 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/1860618778' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/4223628237' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/4266040070' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.102:0/2335435487' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/1988260548' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.101:0/1551765300' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:11:24 compute-1 ceph-mon[79643]: from='client.? 192.168.122.100:0/3553712323' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch